Bhaskar Dasgupta
dasgupta@iitk.ac.in
An Applied Mathematics course for graduate and senior undergraduate students, and also for rising researchers.
Contents I
Preliminary Background
Matrices and Linear Transformations
Operational Fundamentals of Linear Algebra
Systems of Linear Equations
Gauss Elimination Family of Methods
Special Systems and Special Methods
Numerical Aspects in Linear Systems
Contents II
Eigenvalues and Eigenvectors
Diagonalization and Similarity Transformations
Jacobi and Givens Rotation Methods
Householder Transformation and Tridiagonal Matrices
QR Decomposition Method
Eigenvalue Problem of General Matrices
Singular Value Decomposition
Vector Spaces: Fundamental Concepts*
Contents III
Topics in Multivariate Calculus
Vector Analysis: Curves and Surfaces
Scalar and Vector Fields
Polynomial Equations
Solution of Nonlinear Equations and Systems
Optimization: Introduction
Multivariate Optimization
Methods of Nonlinear Optimization*
Contents IV
Constrained Optimization
Linear and Quadratic Programming Problems*
Interpolation and Approximation
Basic Methods of Numerical Integration
Advanced Topics in Numerical Integration*
Numerical Solution of Ordinary Differential Equations
ODE Solutions: Advanced Issues
Existence and Uniqueness Theory
Contents V
First Order Ordinary Differential Equations
Second Order Linear Homogeneous ODEs
Second Order Linear Non-Homogeneous ODEs
Higher Order Linear ODEs
Laplace Transforms
ODE Systems
Stability of Dynamic Systems
Series Solutions and Special Functions
Contents VI
Sturm-Liouville Theory
Fourier Series and Integrals
Fourier Transforms
Minimax Approximation*
Partial Differential Equations
Analytic Functions
Integrals in the Complex Plane
Singularities of Complex Functions
Contents VII
Variational Calculus*
Epilogue
Selected References
Outline
Preliminary Background
Theme of the Course
Course Contents
Sources for More Detailed Study
Logistic Strategy
Expected Background
Course Contents
Numerical methods
Differential equations
Complex analysis
Sources for More Detailed Study
If you have the time, need and interest, then you may consult …
Logistic Strategy
Tutorial Plan
[Table: tutorial problem assignments for Chapters 2–25]
Expected Background
Points to note
Outline
Matrices
Question: What is a matrix?
A matrix maps one vector to another:
Ax = y

Consider these definitions:
y = f(x)
y = f(x) = f(x1, x2, …, xn)
yk = fk(x) = fk(x1, x2, …, xn),  k = 1, 2, …, m
y = f(x)
y = Ax
Matrices
Geometry and Algebra
Linear Transformations
Matrix Terminology
[Figure: vector x in the domain (axes X1, X2, X3) mapped to its image y in the codomain (axes Y1, Y2)]
Linear Transformations
Matrix Terminology
Matrix product
Transpose
Conjugate transpose
Points to note
Outline
Consider A ∈ R^(m×n) as a mapping
A : R^n → R^m,  Ax = y,  x ∈ R^n,  y ∈ R^m.
Observations
1. Every x ∈ R^n has an image y ∈ R^m, but every y ∈ R^m need not have a pre-image in R^n.
   Range (or range space) as subset/subspace of co-domain: containing images of all x ∈ R^n.
2. Image of x ∈ R^n in R^m is unique, but pre-image of y ∈ R^m need not be.
   It may be non-existent, unique or infinitely many.
   Null space as subset/subspace of domain: containing pre-images of only 0 ∈ R^m.
[Figure: Null(A) in the domain mapped to 0; Range(A) as a subspace of the codomain]
Linear independence: k1 v1 + k2 v2 + ⋯ + kr vr = 0 only for k1 = k2 = ⋯ = kr = 0.
Range(A) = {y : y = Ax, x ∈ R^n}
Null(A) = {x : x ∈ R^n, Ax = 0}
Basis
Orthogonal basis: {v1, v2, …, vn} with vj^T vk = 0 for j ≠ k.
Orthonormal basis: vj^T vk = δjk = 0 if j ≠ k, 1 if j = k.
Natural basis:
e1 = [1 0 0 ⋯ 0]^T,  e2 = [0 1 0 ⋯ 0]^T,  …,  en = [0 0 0 ⋯ 1]^T
For basis vectors assembled into an orthogonal matrix V:  det V = +1 or −1.
Change of Basis
x = [c1 c2 ⋯ cn] [x̄1 x̄2 ⋯ x̄n]^T
With C = [c1 c2 ⋯ cn],  x = C x̄.

For a mapping
A : R^n → R^m,  Ax = y,
change bases as x = P x̄ and y = Q ȳ.
Then Ax = y  ⇒  Ā x̄ = ȳ,  with
Ā = Q⁻¹AP
Special case: m = n and P = Q gives a similarity transformation
Ā = P⁻¹AP
Elementary Transformations
Through elementary row and column operations, any matrix can be reduced to the form
[ Ir  0 ]
[ 0   0 ]

Points to note
Outline
Nature of Solutions
Basic Idea of Solution Methodology
Homogeneous Systems
Pivoting
Partitioning and Block Operations

Nature of Solutions
Ax = b has a solution  ⇔  b ∈ Range(A)
Uniqueness of solutions:
Rank(A) = Rank([A | b]) = n  ⇒  solution of Ax = b is unique.
Otherwise, the complete solution is x = x̄ + xN, with A x̄ = b and xN ∈ Null(A).
Rank(A) = Rank([A | b]) < n  ⇒  infinite solutions.
Basic Idea of Solution Methodology
Pre-multiply by a suitable non-singular matrix R: [RA]x = Rb has the same solution(s) as Ax = b.
Homogeneous Systems
Reduce A to the row-reduced echelon form, or RREF.
Features of RREF:
1. The first non-zero entry in any row is a 1, the leading 1.
2. In the same column as the leading 1, other entries are zero.
3. Non-zero entries in a lower row appear later.
Variables corresponding to columns having leading 1s are expressed in terms of the remaining variables.
Solution of Ax = 0:
x = u1 z1 + u2 z2 + ⋯ + u(n−k) z(n−k) = [z1 z2 ⋯ z(n−k)] [u1 u2 ⋯ u(n−k)]^T
Basis of Null(A): {z1, z2, …, z(n−k)}
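The RREF construction of a null-space basis sketched above can be coded directly. The following is a pure-Python illustration of my own (not from the slides); the matrix A below is a made-up example.

```python
def rref(M, tol=1e-12):
    """Row-reduce M (list of lists) to RREF; return (R, pivot_columns)."""
    R = [row[:] for row in M]
    m, n = len(R), len(R[0])
    pivots, r = [], 0
    for c in range(n):
        # pick the largest available entry in column c as the leading 1
        p = max(range(r, m), key=lambda i: abs(R[i][c]), default=None)
        if p is None or abs(R[p][c]) < tol:
            continue
        R[r], R[p] = R[p], R[r]
        R[r] = [v / R[r][c] for v in R[r]]          # scale to get the leading 1
        for i in range(m):                          # zero the rest of column c
            if i != r and abs(R[i][c]) > tol:
                R[i] = [a - R[i][c] * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

def null_basis(M):
    """Basis {z1, ..., z_(n-k)} of Null(M): one vector per free column."""
    R, pivots = rref(M)
    n = len(M[0])
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for f in free:
        z = [0.0] * n
        z[f] = 1.0
        for r, p in enumerate(pivots):   # pivot variables in terms of free ones
            z[p] = -R[r][f]
        basis.append(z)
    return basis

A = [[1.0, 2.0, 3.0],
     [2.0, 4.0, 6.0]]       # rank 1, so Null(A) is 2-dimensional
Z = null_basis(A)
```

Each returned vector satisfies Az = 0, and together they span Null(A).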
Pivoting
Attempt:
To get 1 at diagonal (or leading) position, with 0 elsewhere.
Key step: division by the diagonal (or leading) entry.
Consider the partially processed matrix
A = [ Ik  ⋯        ]
    [ ⋯  ε  ⋯ BIG ⋯ ]
    [ ⋯  big ⋯      ]
with a zero or tiny entry ε at the pivot location, a large entry "big" below it in the same column, and the largest remaining entry "BIG" elsewhere.
Cannot divide by zero. Should not divide by ε.
Partial pivoting brings "big" to the pivot position by a row interchange; complete pivoting brings "BIG" there by row and column interchanges.
Complete pivoting does not give a huge advantage over partial pivoting, but requires maintaining of variable permutation for later unscrambling.
Partitioning and Block Operations
                      [ x1 ]
[ A11 A12 A13 ]       [ x2 ]   =   [ y1 ]
[ A21 A22 A23 ]       [ x3 ]       [ y2 ]

Points to note
Outline

Gauss-Jordan Elimination
Assemble C = [A  b1  b2  b3  In  B] ∈ R^(n×(2n+3+p)) and follow the algorithm.
C → C* = [In  A⁻¹b1  A⁻¹b2  A⁻¹b3  A⁻¹  A⁻¹B]
Remarks:

Gauss-Jordan Algorithm
For k = 1, 2, 3, …, (n−1): [pivot and eliminate in column k]
If cnn = 0, then exit. Else, divide row n by cnn.
In case of non-singular A, default termination.
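The assembly C = [A | right-hand sides | I] and its reduction can be sketched as follows — a pure-Python illustration of mine (not the course's code), with partial pivoting added for stability; the matrix and right-hand side are made up.

```python
def gauss_jordan(A, rhs_cols):
    """Reduce C = [A | rhs columns | I] until A becomes I; then the right
    part of C holds [A^{-1}b1 ... | A^{-1}]."""
    n = len(A)
    C = [A[i][:] + [col[i] for col in rhs_cols] + [float(i == j) for j in range(n)]
         for i in range(n)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(C[i][k]))   # partial pivoting
        if abs(C[p][k]) < 1e-12:
            raise ValueError("singular matrix")
        C[k], C[p] = C[p], C[k]
        C[k] = [v / C[k][k] for v in C[k]]                 # leading 1
        for i in range(n):                                 # zero column k elsewhere
            if i != k and C[i][k] != 0.0:
                C[i] = [a - C[i][k] * b for a, b in zip(C[i], C[k])]
    sols = [[C[i][n + j] for i in range(n)] for j in range(len(rhs_cols))]
    Ainv = [row[n + len(rhs_cols):] for row in C]
    return sols, Ainv

A = [[2.0, 1.0], [1.0, 3.0]]
(sol,), Ainv = gauss_jordan(A, [[3.0, 4.0]])   # solve Ax = [3, 4] and invert A
```

One pass produces solutions for all assembled right-hand sides plus the inverse, which is the point of working on the whole assembled C.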
Gaussian Elimination with Back-Substitution
Gaussian elimination: reduce Ax = b to the upper-triangular system
[ a11  a12  ⋯  a1n ] [ x1 ]   [ b1 ]
[      a22  ⋯  a2n ] [ x2 ] = [ b2 ]
[           ⋱   ⋮  ] [ ⋮  ]   [ ⋮  ]
[              ann ] [ xn ]   [ bn ]
Back-substitutions:
xn = bn / ann,
xi = (1/aii) [ bi − Σ_{j=i+1}^{n} aij xj ]  for i = n−1, n−2, …, 2, 1
Remarks
Computational cost half compared to G-J elimination.
Like G-J elimination, prior knowledge of RHS needed.
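The elimination and back-substitution steps above can be sketched in code; this is a minimal pure-Python version of mine (not the course's code) with partial pivoting folded in, and a made-up example system.

```python
def gauss_solve(A, b):
    """Forward elimination (with partial pivoting) to triangular form, then
    back-substitution: x_n = b_n/a_nn, x_i = (b_i - sum_j a_ij x_j)/a_ii."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]                    # partial pivoting
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            M[i] = [a - f * c for a, c in zip(M[i], M[k])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                 # back-substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

x = gauss_solve([[0.0, 1.0, 2.0],
                 [3.0, 1.0, 2.0],
                 [2.0, 1.0, 3.0]], [5.0, 8.0, 9.0])
```

The zero in the (1,1) position makes the pivoting step essential here.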
LU Decomposition
The k-th elimination step is effected by pre-multiplication with Rk: the identity matrix, except that the k-th column below the diagonal carries the multipliers −ajk/akk (for j > k); e.g.
Rk |k=1 = [ 1         0  0  ⋯  0 ]
          [ −a21/a11  1  0  ⋯  0 ]
          [ −a31/a11  0  1  ⋯  0 ]
          [ ⋮         ⋮  ⋮  ⋱  ⋮ ]
          [ −an1/a11  0  0  ⋯  1 ]
With L = R⁻¹, where R = R(n−1) ⋯ R2 R1:
A = LU.
LU Decomposition
To solve Ax = b with A = LU:  Ly = b  and  Ux = y.
Forward substitutions:
yi = (1/lii) [ bi − Σ_{j=1}^{i−1} lij yj ]  for i = 1, 2, 3, …, n;
Back-substitutions:
xi = (1/uii) [ yi − Σ_{j=i+1}^{n} uij xj ]  for i = n, n−1, n−2, …, 1.
LU Decomposition
L = [ l11  0    0    ⋯  0   ]           U = [ u11  u12  u13  ⋯  u1n ]
    [ l21  l22  0    ⋯  0   ]               [ 0    u22  u23  ⋯  u2n ]
    [ l31  l32  l33  ⋯  0   ]    and        [ 0    0    u33  ⋯  u3n ]
    [ ⋮    ⋮    ⋮    ⋱  ⋮   ]               [ ⋮    ⋮    ⋮    ⋱  ⋮   ]
    [ ln1  ln2  ln3  ⋯  lnn ]               [ 0    0    0    ⋯  unn ]
Equating entries of A = LU:
aij = Σ_{k=1}^{i} lik ukj  for i ≤ j,   aij = Σ_{k=1}^{j} lik ukj  for i > j.
Doolittle's algorithm
Choose lii = 1
For j = 1, 2, 3, …, n
1. uij = aij − Σ_{k=1}^{i−1} lik ukj  for 1 ≤ i ≤ j
2. lij = (1/ujj) ( aij − Σ_{k=1}^{j−1} lik ukj )  for i > j
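Doolittle's algorithm and the two triangular substitutions translate directly into code. A pure-Python sketch of mine (not the course's code), with a made-up example matrix:

```python
def doolittle(A):
    """Doolittle LU: l_ii = 1; u_ij = a_ij - sum_{k<i} l_ik u_kj (i <= j),
    l_ij = (a_ij - sum_{k<j} l_ik u_kj) / u_jj (i > j)."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for i in range(j + 1):          # entries of U in column j
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for i in range(j + 1, n):       # entries of L in column j
            L[i][j] = (A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))) / U[j][j]
    return L, U

def lu_solve(L, U, b):
    """Forward substitution Ly = b, then back-substitution Ux = y."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))   # l_ii = 1
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[3.0, 1.0, 2.0],
     [6.0, 3.0, 4.0],
     [3.0, 2.0, 5.0]]
L, U = doolittle(A)
x = lu_solve(L, U, [6.0, 13.0, 10.0])
```

Once L and U are stored, each new right-hand side costs only the two O(n²) substitution sweeps — the practical advantage over re-running elimination.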
LU Decomposition
The decomposition may fail without pivoting:
[ 0 1 2 ]   [ 1      0    0 ] [ u11 = 0  u12  u13 ]
[ 3 1 2 ] = [ l21 =? 1    0 ] [ 0        u22  u23 ]
[ 2 1 3 ]   [ l31    l32  1 ] [ 0        0    u33 ]
breaks down at the zero pivot u11 = 0.
LU-decompose a permutation of its rows:
[ 0 1 2 ]   [ 0 1 0 ] [ 3 1 2 ]   [ 0 1 0 ] [ 1    0    0 ] [ 3 1 2 ]
[ 3 1 2 ] = [ 1 0 0 ] [ 0 1 2 ] = [ 1 0 0 ] [ 0    1    0 ] [ 0 1 2 ]
[ 2 1 3 ]   [ 0 0 1 ] [ 2 1 3 ]   [ 0 0 1 ] [ 2/3  1/3  1 ] [ 0 0 1 ]
Points to note
Outline
Quadratic form
q(x) = x^T A x = Σ_{i=1}^{n} Σ_{j=1}^{n} aij xi xj
Positive definite: q(x) > 0 for all x ≠ 0.
Positive semi-definite: q(x) ≥ 0 for all x ≠ 0.
Sylvester's criteria:
a11 ≥ 0,  det [ a11 a12 ; a21 a22 ] ≥ 0,  …,  det A ≥ 0;
Cholesky Decomposition
Cholesky Decomposition
For symmetric positive definite A:  A = L L^T, the entries of L determined column by column (for i < j ≤ n).
For solving Ax = b,
Forward substitutions: Ly = b
Back-substitutions: L^T x = y
Remarks

Sparse Systems*
Sherman-Morrison formula
(A + u v^T)⁻¹ = A⁻¹ − (A⁻¹ u)(v^T A⁻¹) / (1 + v^T A⁻¹ u)
Woodbury formula
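The Cholesky factorization A = LLᵀ referred to above can be sketched compactly; this is my own pure-Python illustration (not from the slides), with a small made-up positive definite matrix.

```python
def cholesky(A):
    """A = L L^T for symmetric positive definite A (L lower-triangular)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (A[i][i] - s) ** 0.5      # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]   # below-diagonal entry
    return L

A = [[4.0, 2.0],
     [2.0, 5.0]]
L = cholesky(A)
```

Since only one triangular factor is computed and stored, the cost is roughly half that of a general LU decomposition; a negative value under the square root would signal that A is not positive definite.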
Points to note
Outline

The p-norm
‖x‖p = ( Σ_{i=1}^{n} |xi|^p )^(1/p)
Weighted norm
‖x‖w = sqrt( x^T W x )
Matrix norm:
‖A‖ = max_{x≠0} ‖Ax‖ / ‖x‖ = max_{‖x‖=1} ‖Ax‖
For the Euclidean norm, ‖A‖ = σ1(A), the largest singular value.
Example of an ill-conditioned system:
0.9999 x1 − 1.0001 x2 = 1
x1 − x2 = 1 + ε
Solution: x1 = (10001ε + 1)/2,  x2 = (9999ε − 1)/2
See illustration.
Sensitivity:
‖δx‖/‖x‖ ≤ ‖A‖ ‖A⁻¹‖ ‖δA‖/‖A‖,  with the condition number κ(A) = ‖A‖ ‖A⁻¹‖.
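The 2×2 system above can be used to see the condition number at work. The following sketch (mine, not from the slides) computes κ(A) in the 1-norm and shows how a tiny change in ε swings the solution:

```python
A = [[0.9999, -1.0001],
     [1.0,    -1.0   ]]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[ A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det,  A[0][0] / det]]

def norm1(M):
    """Matrix 1-norm: maximum absolute column sum."""
    return max(sum(abs(M[i][j]) for i in range(2)) for j in range(2))

kappa = norm1(A) * norm1(Ainv)     # condition number in the 1-norm, ~2 * 10^4

def solve(eps):
    """Solution for right-hand side [1, 1 + eps]."""
    b = [1.0, 1.0 + eps]
    return [Ainv[0][0] * b[0] + Ainv[0][1] * b[1],
            Ainv[1][0] * b[0] + Ainv[1][1] * b[1]]
```

A perturbation of 0.001 in the data moves x1 from 0.5 to about 5.5 — amplification on the scale κ(A) predicts.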
[Figure: lines (1) and (2) in the (x1, x2) plane; panels (a)–(c) show well- and ill-conditioned intersections, panel (d) singularity]
Rectangular Systems
Over-determined case: minimize
(1/2) ‖Ax − b‖² = (1/2) (Ax − b)^T (Ax − b) = (1/2) x^T A^T A x − x^T A^T b + (1/2) b^T b
Under-determined case: consider the problem
minimize U(x) = (1/2) x^T x  subject to  Ax = b.
Solution
x = A^T λ = A^T (A A^T)⁻¹ b
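Both rectangular cases can be illustrated with tiny examples using the closed-form solutions quoted above. This is a sketch of my own (not the course's code), with made-up systems:

```python
def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / d, -M[0][1] / d],
            [-M[1][0] / d,  M[0][0] / d]]

# Over-determined (3 equations, 2 unknowns): least squares x = (A^T A)^{-1} A^T b
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [[1.0], [1.0], [1.0]]
At = transpose(A)
x_ls = matmul(inv2(matmul(At, A)), matmul(At, b))

# Under-determined (1 equation, 2 unknowns): minimum-norm x = A^T (A A^T)^{-1} b
A2 = [[3.0, 4.0]]                         # the line 3 x1 + 4 x2 = 25
s = A2[0][0] ** 2 + A2[0][1] ** 2         # A A^T is the scalar 25
x_mn = [A2[0][0] * 25.0 / s, A2[0][1] * 25.0 / s]
```

The least-squares point minimizes the residual norm; the minimum-norm point [3, 4] is the point of the line closest to the origin.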
Singularity-Robust Solutions
The choice of ν?
x = A^T (A A^T + ν² I)⁻¹ b
Iterative Methods
Jacobi's method:
xi^(k+1) = (1/aii) [ bi − Σ_{j=1, j≠i}^{n} aij xj^(k) ]  for i = 1, 2, 3, …, n.
Gauss-Seidel method:
xi^(k+1) = (1/aii) [ bi − Σ_{j=1}^{i−1} aij xj^(k+1) − Σ_{j=i+1}^{n} aij xj^(k) ]  for i = 1, 2, 3, …, n.

Points to note
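The two update rules above differ only in whether freshly computed components are used within the same sweep. A pure-Python sketch of mine (not from the slides), on a made-up diagonally dominant system:

```python
def jacobi_step(A, b, x):
    """One Jacobi sweep: every x_i updated from the previous iterate only."""
    n = len(b)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: updated components are used immediately."""
    n = len(b)
    x = x[:]
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[4.0, 1.0], [2.0, 5.0]]   # diagonally dominant: both iterations converge
b = [9.0, 9.0]                 # exact solution x = [2, 1]

xj = [0.0, 0.0]
xg = [0.0, 0.0]
for _ in range(60):
    xj = jacobi_step(A, b, xj)
    xg = gauss_seidel_step(A, b, xg)
```

Diagonal dominance guarantees convergence here; Gauss-Seidel typically needs fewer sweeps because each sweep already propagates new information.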
Outline
Eigenvalue Problem
Generalized Eigenvalue Problem
Some Basic Theoretical Results
Power Method

Eigenvalue Problem
For a matrix A ∈ R^(n×n): find λ and v ≠ 0 with Av = λv.
Generalized Eigenvalue Problem
[Figure: spring-mass chain with stiffnesses k and masses m]
M ẍ + K x = 0.
Natural frequencies and corresponding modes?
Assuming a vibration mode x = φ sin(ωt + α),
(−ω² M + K) φ sin(ωt + α) = 0  ⇒  K φ = ω² M φ
Reduce as M⁻¹K φ = ω² φ? Why is it not a good idea?
K symmetric, M symmetric and positive definite!!
A symmetric eigenvalue problem can be posed instead.
Some Basic Theoretical Results
Eigenvalues of transpose
Eigenvalues of A^T are the same as those of A.
Block diagonal matrix: with A = diag(A1, A2, A3) and A2 v2 = λ2 v2,
A [0  v2  0]^T = [0  (A2 v2)  0]^T = λ2 [0  v2  0]^T,
so eigenvalues of the diagonal blocks are eigenvalues of A.
Block triangular matrix:
H = [ A  B ; 0  C ],  A ∈ R^(r×r) and C ∈ R^(s×s)
If Av = λv, then
H [v ; 0] = [ A  B ; 0  C ] [v ; 0] = [Av ; 0] = [λv ; 0] = λ [v ; 0].
If μ is an eigenvalue of C, then it is also an eigenvalue of C^T, and with C^T w = μ w,
H^T [0 ; w] = [ A^T  0 ; B^T  C^T ] [0 ; w] = [0 ; C^T w] = μ [0 ; w].
Shift theorem
Eigenvectors of A + μI are the same as those of A.
Eigenvalues: shifted by μ.
Deflation
For a symmetric matrix A, with mutually orthogonal eigenvectors, having (λj, vj) as an eigenpair,
B = A − λj (vj vj^T) / (vj^T vj)
Eigenspace
If v1, v2, …, vk are eigenvectors of A corresponding to the same eigenvalue λ, then so is every non-zero vector of the
eigenspace: < v1, v2, …, vk >
Similarity transformation
B = S⁻¹AS: same transformation expressed in new basis.
det(λI − A) = det S⁻¹ det(λI − A) det S = det(λI − B)
Same characteristic polynomial!
Eigenvalues are the property of a linear transformation, not of the basis.
An eigenvector v of A transforms to S⁻¹v, as the corresponding eigenvector of B.
Power Method
Consider matrix A with |λ1| > |λ2| ≥ |λ3| ≥ ⋯ ≥ |λn|.
For vector x = α1 v1 + α2 v2 + ⋯ + αn vn,
A^p x = λ1^p [ α1 v1 + α2 (λ2/λ1)^p v2 + α3 (λ3/λ1)^p v3 + ⋯ + αn (λn/λ1)^p vn ]
As p → ∞, A^p x → λ1^p α1 v1, and
λ1 = lim_{p→∞} (A^p x)r / (A^(p−1) x)r,  r = 1, 2, 3, …, n.
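The limiting ratio above is exactly what an implementation tracks. A pure-Python sketch of mine (not the course's code), with rescaling each step to avoid overflow and a made-up 2×2 example whose eigenvalues are 3 and 1:

```python
def power_method(A, x, iters=100):
    """Dominant eigenpair via repeated multiplication; the component ratio
    (A^p x)_r / (A^(p-1) x)_r estimates lambda_1."""
    n = len(A)
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        r = max(range(n), key=lambda i: abs(x[i]))   # a component with x_r != 0
        lam = y[r] / x[r]                            # the ratio from the slides
        m = max(abs(v) for v in y)
        x = [v / m for v in y]                       # rescale to unit max-norm
    return lam, x

A = [[2.0, 1.0],
     [1.0, 2.0]]            # eigenvalues 3 and 1, dominant eigenvector [1, 1]
lam, v = power_method(A, [1.0, 0.0])
```

Convergence is geometric with ratio |λ2/λ1| (here 1/3), so a few dozen iterations already give machine-precision estimates.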
Points to note
Outline
Diagonalizability
With eigenvectors assembled as S = [v1 v2 ⋯ vn]:
AS = A [v1 v2 ⋯ vn] = [λ1 v1  λ2 v2  ⋯  λn vn]
   = [v1 v2 ⋯ vn] [ λ1 0 ⋯ 0 ; 0 λ2 ⋯ 0 ; ⋮ ⋱ ; 0 0 ⋯ λn ] = S Λ
A = S Λ S⁻¹  and  S⁻¹ A S = Λ
Diagonalizability:
A matrix having a complete set of n linearly independent eigenvectors is diagonalizable.
Existence of a complete set of eigenvectors:
A diagonalizable matrix possesses a complete set of n linearly independent eigenvectors.
Canonical Forms
Jordan canonical form:
J = [ J1 ; J2 ; ⋱ ; Jk ],   Jr = [ λr 1 ; λr ⋱ ; ⋱ 1 ; λr ]
For one Jordan block, A [v  w2  w3  ⋯] = [v  w2  w3  ⋯] Jr, i.e.
[Av  Aw2  Aw3  ⋯] = [v  w2  w3  ⋯] [ λ 1 ; λ 1 ; ⋱ ]
Av = λv,  Aw2 = v + λw2,  Aw3 = w2 + λw3, …
Generalized eigenvectors w2, w3 etc:
(A − λI) v = 0,
(A − λI) w2 = v  and  (A − λI)² w2 = 0,
(A − λI) w3 = w2  and  (A − λI)³ w3 = 0, …
Canonical Forms
Diagonal form
Matrix is diagonalizable.
Triangular form
Triangularization: Change of basis of a linear transformation so as to get its matrix in the triangular form.
Determination of eigenvalues: in a triangular matrix, entries below the diagonal are 0 and the diagonal entries λi are the eigenvalues.
Canonical Forms
Hessenberg form:
Hu = [ × × × ⋯ × ]
     [ × × × ⋯ × ]
     [   × × ⋯ × ]
     [     ⋱ ⋱ ⋮ ]
     [       × × ]
(zero below the first sub-diagonal)
Note: Tridiagonal and Hessenberg forms do not fall in the category of canonical forms.
Symmetric Matrices
Eigenvalues are real and eigenvectors can be chosen mutually orthogonal.
Further, A = V Λ V^T.
Similar results for complex Hermitian matrices.
Symmetric Matrices
Real eigenvalues: for a candidate eigenvalue λ = h + ik of a real symmetric matrix, one finds
k = 0 and λ = h.
Symmetric Matrices
Suppose an eigenvector v admitted a generalized eigenvector w, with (A − λI) w = v. Then, using symmetry,
(Av)^T w − v^T (Aw) = −‖v‖²  ⇒  ‖v‖² = 0,
which is absurd.
An eigenvector will not admit a generalized eigenvector.
All Jordan blocks will be of 1 × 1 size.
Symmetric Matrices
Eigenvectors of distinct eigenvalues are mutually orthogonal: for λ1 ≠ λ2,
λ1 v1^T v2 = (A v1)^T v2 = v1^T (A v2) = λ2 v1^T v2  ⇒  v1^T v2 = 0.
A = V Λ V^T = [v1 v2 ⋯ vn] diag(λ1, λ2, …, λn) [v1 v2 ⋯ vn]^T
  = λ1 v1 v1^T + λ2 v2 v2^T + ⋯ + λn vn vn^T = Σ_{i=1}^{n} λi vi vi^T
Similarity Transformations
General → Hessenberg → Triangular
Symmetric → Symmetric Tridiagonal → Diagonal
Points to note
Outline
Plane Rotations
[Figure: point P(x, y); axes rotated by angle θ from (X, Y) to (X′, Y′)]
x′ = OL + LM = OL + KN = x cos θ + y sin θ
y′ = PN − MN = PN − LK = y cos θ − x sin θ
Plane Rotations
R_xy = [  cos θ  sin θ  0 ]         R_xz = [  cos θ  0  sin θ ]
       [ −sin θ  cos θ  0 ] ,              [  0      1  0     ]   etc.
       [  0      0      1 ]                [ −sin θ  0  cos θ ]
Rotation in the (p, q) coordinate plane of R^n: Ppq is the identity matrix, except for the entries
(p, p) = c,  (p, q) = s,  (q, p) = −s,  (q, q) = c,  with c = cos θ, s = sin θ.
Matrix A is transformed as
A′ = Ppq⁻¹ A Ppq = Ppq^T A Ppq,
only the p-th and q-th rows and columns being affected.
a′pr = a′rp = c arp − s arq  for p ≠ r ≠ q,
a′qr = a′rq = c arq + s arp  for p ≠ r ≠ q,
a′pp = c² app + s² aqq − 2sc apq,
a′qq = s² app + c² aqq + 2sc apq,  and
a′pq = a′qp = (c² − s²) apq + sc (app − aqq)
In a Jacobi rotation,
a′pq = 0  ⇒  (c² − s²)/(2sc) = (aqq − app)/(2apq) = k (say).
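A full cyclic sweep of such rotations can be sketched in code. This is my own pure-Python illustration (not the course's code); the rotation-angle sign convention may differ from the slides, but each rotation annihilates the targeted off-diagonal entry, and the example matrix is made up.

```python
import math

def jacobi_eigen(A, sweeps=10):
    """Cyclic Jacobi method for symmetric A: for each (p, q), pick the angle
    from tan 2t = 2 a_pq / (a_pp - a_qq) so that a'_pq = 0, and apply the
    rotation to rows and columns p, q. Returns the sorted diagonal."""
    A = [row[:] for row in A]
    n = len(A)
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p][q]) < 1e-15:
                    continue
                t = 0.5 * math.atan2(2.0 * A[p][q], A[p][p] - A[q][q])
                c, s = math.cos(t), math.sin(t)
                for r in range(n):              # update columns p and q
                    arp, arq = A[r][p], A[r][q]
                    A[r][p] = c * arp + s * arq
                    A[r][q] = -s * arp + c * arq
                for r in range(n):              # update rows p and q
                    apr, aqr = A[p][r], A[q][r]
                    A[p][r] = c * apr + s * aqr
                    A[q][r] = -s * apr + c * aqr
    return sorted(A[i][i] for i in range(n))

eigs = jacobi_eigen([[2.0, 1.0, 0.0],
                     [1.0, 2.0, 1.0],
                     [0.0, 1.0, 2.0]])   # eigenvalues 2 - sqrt(2), 2, 2 + sqrt(2)
```

Because every rotation reduces the off-diagonal sum of squares by 2a²pq (next slide), the diagonal converges to the eigenvalues.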
Sum of squares of off-diagonal entries:
S = Σ_{r≠s} |ars|²  and  S′ = Σ_{r≠s} |a′rs|²
differ by
ΔS = S′ − S = −2 apq² ≤ 0,
since a′rp² + a′rq² = arp² + arq² for p ≠ r ≠ q and a′pq = 0.
So S decreases monotonically, and S → 0 over repeated rotations.
While applying the rotation P_pq, demand a'_rq = 0:  tan(phi) = a_rq / a_rp.

With r = p - 1: Givens rotation.
Points to note

Outline
[Figure: reflection at a plane through O, mapping u to v; w = (u - v)/||u - v||]

H_k = I_k - 2 w w^T
is symmetric and orthogonal.

For any vector x orthogonal to w,
H_k x = (I_k - 2 w w^T) x = x,
and
H_k w = (I_k - 2 w w^T) w = -w.

Householder Method
Householder Method

With u the sub-column of A below a_11 and v = ||u|| e_1,

A^(1) = P_1 A P_1 = [ 1  0 ; 0  H_{n-1} ] [ a_11  u^T ; u  A_1 ] [ 1  0 ; 0  H_{n-1} ]
      = [ a_11  v^T ; v  H_{n-1} A_1 H_{n-1} ]
      = [ d_1  e_2  0 ; e_2  d_2  u_2^T ; 0  u_2  A_2 ].

Next, with v_2 = ||u_2|| e_1, we form
P_2 = [ I_2  0 ; 0  H_{n-2} ]
and operate as A^(2) = P_2 A^(1) P_2.

After j steps,

A^(j) = [ d_1  e_2                            ;
          e_2  d_2   .                        ;
                .    .    e_{j+1}             ;
                   e_{j+1}  d_{j+1}  u_{j+1}^T ;
                            u_{j+1}  A_{j+1}  ].

By n - 2 steps, with P = P_1 P_2 P_3 ... P_{n-2},
A^(n-2) = P^T A P
is symmetric tridiagonal.
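The step-by-step reduction above can be sketched with explicit (dense) reflectors; function names, the random test matrix, and the skip of already-reduced columns are ours:

```python
import numpy as np

def householder_tridiagonalize(A):
    """Reduce a symmetric matrix to tridiagonal form: returns (T, P) with
    P.T @ A @ P = T. At step j a reflector maps the subcolumn u to ||u|| e1."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    P = np.eye(n)
    for j in range(n - 2):
        u = A[j + 1:, j].copy()
        v = np.zeros_like(u)
        v[0] = np.linalg.norm(u)
        if np.allclose(u, v):
            continue                        # column already in tridiagonal form
        w = (u - v) / np.linalg.norm(u - v)
        H = np.eye(n - j - 1) - 2.0 * np.outer(w, w)   # H_k = I_k - 2 w w^T
        Pj = np.eye(n)
        Pj[j + 1:, j + 1:] = H
        A = Pj @ A @ Pj                     # Pj is symmetric and orthogonal
        P = P @ Pj
    return A, P
```

(For robustness against cancellation one usually picks the sign of v adaptively; the plain choice above follows the slides.)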
276,
Reflection Transformation
Eigenvalues of Symmetric TridiagonalHouseholder
Matrices
Householder
Method
Eigenvalues of Symmetric Tridiagonal Matrices
d1
e2
e2
T=
d2
..
.
..
..
en1
en1
dn1
en
en
dn
277,
Characteristic polynomial: p(λ) = det(λI - T).
With p_0(λ) = 1 and p_1(λ) = λ - d_1,

p_2(λ) = (λ - d_2)(λ - d_1) - e_2^2,
p_{k+1}(λ) = (λ - d_{k+1}) p_k(λ) - e_{k+1}^2 p_{k-1}(λ):

a Sturmian sequence if e_j ≠ 0 ∀ j.
[Figure: roots of successive Sturm polynomials p_{j-1}(λ), p_j(λ), p_{j+1}(λ) interlace, bracketing the eigenvalues λ_i, λ_{i+1}]

Refer figure.

Bracketing and bisection: refine an interval [a, b] known to contain an eigenvalue by evaluating the sequence at the midpoint (a+b)/2.

Algorithm

Points to note
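One common statement of the Sturm sequence property (the slides only name it) is: evaluating p_0, p_1, ..., p_n at μ, the number of sign agreements between consecutive members equals the number of eigenvalues smaller than μ. A pure-Python sketch of bracketing and bisection built on this; names, the tie-break convention at exact zeros, and the test matrix are ours:

```python
def sturm_count(d, e, mu):
    """Number of eigenvalues of the symmetric tridiagonal matrix with diagonal d
    and off-diagonal e (e[k] couples rows k and k+1) that are smaller than mu,
    via p0 = 1, p1 = mu - d0, p_{k+1} = (mu - d_{k+1}) p_k - e_{k+1}^2 p_{k-1}."""
    count = 0
    pm, p = 1.0, mu - d[0]
    for k in range(len(d)):
        if k > 0:
            pm, p = p, (mu - d[k]) * p - e[k - 1] ** 2 * pm
        if p == 0.0:                  # tie-break: count an exact zero as a sign change
            p = -1e-30 if pm > 0 else 1e-30
        if p * pm > 0:                # sign agreement
            count += 1
    return count

def bisect_eigenvalue(d, e, i, lo, hi, tol=1e-12):
    """i-th smallest eigenvalue (i = 1, 2, ...) by bisection on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sturm_count(d, e, mid) >= i:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Gershgorin discs give a safe starting bracket [lo, hi]; for d = (0, 0, 0), e = (1, 1) the eigenvalues are -sqrt(2), 0, sqrt(2).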
Outline
QR Decomposition Method
QR Decomposition
QR Iterations
Conceptual Basis of QR Method*
QR Algorithm with Shift*

QR Decomposition

[a_1 ... a_n] = [q_1 ... q_n] [ r_11  ...  r_1n ;
                                       .    :   ;
                                           r_nn ]

A simple method based on Gram-Schmidt orthogonalization:
considering the columnwise equality a_j = Σ_{i=1}^{j} r_ij q_i, for j = 1, 2, 3, ..., n;

r_ij = q_i^T a_j  for i < j,
a'_j = a_j - Σ_{i=1}^{j-1} r_ij q_i,
q_j = a'_j / r_jj  with r_jj = ||a'_j||,  if r_jj ≠ 0;
      any vector satisfying q_i^T q_j = δ_ij for 1 ≤ i ≤ j, if r_jj = 0.
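The columnwise construction translates almost directly into code; this is classical Gram-Schmidt for the full-rank case only (names and the test matrix are ours):

```python
import numpy as np

def gram_schmidt_qr(A):
    """QR decomposition by classical Gram-Schmidt: a_j = sum_{i<=j} r_ij q_i."""
    A = np.array(A, dtype=float)
    n = A.shape[1]
    Q = np.zeros_like(A)
    R = np.zeros((n, n))
    for j in range(n):
        a = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]     # r_ij = q_i^T a_j
            a -= R[i, j] * Q[:, i]          # a'_j = a_j - sum r_ij q_i
        R[j, j] = np.linalg.norm(a)
        Q[:, j] = a / R[j, j]               # q_j = a'_j / r_jj  (r_jj != 0 assumed)
    return Q, R
```

Classical Gram-Schmidt loses orthogonality for ill-conditioned columns; the modified variant or Householder reflections are preferred in practice.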
Householder reflections give a QR decomposition as well. With w = (u_0 - v_0)/||u_0 - v_0||, where u_0 = a_1 and v_0 = ||a_1|| e_1,

P_{n-2} P_{n-3} ... P_2 P_1 P_0 A = P_{n-2} P_{n-3} ... P_2 P_1 [ ||a_1||  * ; 0  A_0 ]
  = P_{n-2} P_{n-3} ... P_2 [ r_11  * ; 0  r_22  * ; 0  0  A_1 ] = ... = R,

and with
Q = (P_{n-2} P_{n-3} ... P_2 P_1 P_0)^T = P_0 P_1 P_2 ... P_{n-3} P_{n-2},
we have Q^T A = R, i.e. A = QR.
QR Iterations

Multiplying Q and R factors in reverse,
A' = RQ = Q^T A Q,
an orthogonal similarity transformation.

1. If A is symmetric, then so is A'.
2. If A is in upper Hessenberg form, then so is A'.
3. If A is symmetric tridiagonal, then so is A'.

Complexity of a QR iteration: O(n) for a symmetric tridiagonal matrix, O(n^2) operations for an upper Hessenberg matrix and O(n^3) for the general case.

Algorithm: Set A_1 = A and for k = 1, 2, 3, ...,
decompose A_k = Q_k R_k,
reassemble A_{k+1} = R_k Q_k.
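The algorithm is two lines per iteration; a sketch using numpy's built-in QR factorization (iteration count and test matrix are ours):

```python
import numpy as np

def qr_iterations(A, iters=200):
    """Plain QR iterations: A_{k+1} = R_k Q_k = Q_k^T A_k Q_k."""
    Ak = np.array(A, dtype=float)
    for _ in range(iters):
        Q, R = np.linalg.qr(Ak)   # decompose A_k = Q_k R_k
        Ak = R @ Q                # reassemble A_{k+1} = R_k Q_k
    return Ak
```

For a symmetric matrix with distinct eigenvalue magnitudes, the iterates approach a diagonal matrix of eigenvalues.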
Quasi-upper-triangular form:

[ B_1  *   ...  *  ;
       B_2  .   :  ;
            .   *  ;
               B_m ]

with 1×1 or 2×2 diagonal blocks B_k.
Conceptual Basis of QR Method*

Further,
A^k Range{e_1, e_2} = Range{q_1, q_2} → Range{v_1, v_2},
and (a_2)_k ≡ Q_k^T A q_2 approaches a vector with zeros below its second component.
And, so on ...
QR Algorithm with Shift*

With a shift μ_k in step k:

Ā_k = A_k - μ_k I;  decompose Ā_k = Q_k R_k,  Ā_{k+1} = R_k Q_k;
A_{k+1} = Ā_{k+1} + μ_k I.

Resulting transformation is
A_{k+1} = R_k Q_k + μ_k I = Q_k^T Ā_k Q_k + μ_k I
        = Q_k^T (A_k - μ_k I) Q_k + μ_k I = Q_k^T A_k Q_k.

Convergence is now governed by the shifted ratios |λ_i - μ_k| / |λ_j - μ_k|.
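A sketch of the shifted iteration with the simple choice μ_k = (A_k)_{nn} and deflation once the last subdiagonal entry vanishes (this plain shift is a cheap stand-in for the Wilkinson shift; names and tolerances are ours):

```python
import numpy as np

def qr_algorithm_shifted(A, max_iters=100, tol=1e-12):
    """Shifted QR iterations with deflation; returns sorted eigenvalue estimates
    (real-spectrum case, e.g. symmetric A)."""
    Ak = np.array(A, dtype=float)
    n = Ak.shape[0]
    eigs = []
    while n > 1:
        for _ in range(max_iters):
            if abs(Ak[n - 1, n - 2]) < tol:
                break
            mu = Ak[n - 1, n - 1]                        # shift mu_k
            Q, R = np.linalg.qr(Ak - mu * np.eye(n))     # A_k - mu_k I = Q_k R_k
            Ak = R @ Q + mu * np.eye(n)                  # A_{k+1} = R_k Q_k + mu_k I
        eigs.append(Ak[n - 1, n - 1])                    # deflate converged eigenvalue
        Ak = Ak[:n - 1, :n - 1].copy()
        n -= 1
    eigs.append(Ak[0, 0])
    return np.sort(np.array(eigs))
```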
Points to note
Outline
Introductory Remarks
Reduction to Hessenberg Form*
QR Algorithm on Hessenberg Matrices*
Inverse Iteration
Recommendation

Introductory Remarks

Pivoting
Elimination
P_rs^(-1) = P_rs: interchange of r-th and s-th rows.

Elimination step: Ā = G_r^(-1) A G_r with elimination matrix

G_r = [ I_r  0  0 ; 0  1  0 ; 0  k  I_{n-r-1} ]   and   G_r^(-1) = [ I_r  0  0 ; 0  1  0 ; 0  -k  I_{n-r-1} ].

G_r^(-1): Row (r+1+i) ← Row (r+1+i) - k_i Row (r+1), for i = 1, 2, 3, ..., n - r - 1
G_r: Column (r+1) ← Column (r+1) + Σ_{i=1}^{n-r-1} [ k_i Column (r+1+i) ]
330,
Particular cases:
an1,n2
0: Separately find the eigenvalues n1 and n
an1,n1 an1,n
from
, continue with the leading
an,n1
an,n
(n 2) (n 2) sub-matrix.
331,
Particular cases:
an1,n2
0: Separately find the eigenvalues n1 and n
an1,n1 an1,n
from
, continue with the leading
an,n1
an,n
(n 2) (n 2) sub-matrix.
332,
Inverse Iteration
With y_0 = Σ_{j=1}^n β_j v_j and y = Σ_{j=1}^n γ_j v_j,
[A - (λ_i)_0 I] y = y_0 gives

Σ_{j=1}^n γ_j [A - (λ_i)_0 I] v_j = Σ_{j=1}^n β_j v_j
⇒ γ_j [λ_j - (λ_i)_0] = β_j
⇒ γ_j = β_j / [λ_j - (λ_i)_0].
Algorithm:
Solve [A - (λ_i)_k I] y = y_k.
Normalize y_{k+1} = y / ||y||.
Improve (λ_i)_{k+1} = (λ_i)_k + 1 / (y_k^T y).

Important issues
The method may not converge for a defective matrix or for one having complex eigenvalues.
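The three-step iteration translates directly into code (dense solve; names, iteration count, starting vector, and the test matrix are ours):

```python
import numpy as np

def inverse_iteration(A, lam0, iters=12):
    """Inverse iteration: solve [A - lam_k I] y = y_k, normalize,
    and improve lam_{k+1} = lam_k + 1/(y_k^T y)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    lam = lam0
    yk = np.ones(n) / np.sqrt(n)       # assumed starting vector
    for _ in range(iters):
        y = np.linalg.solve(A - lam * np.eye(n), yk)
        lam = lam + 1.0 / (yk @ y)     # eigenvalue improvement
        yk = y / np.linalg.norm(y)
    return lam, yk
```

Once lam is accurate, A - lam I is nearly singular; the solve then returns a very large vector, which after normalization is exactly the eigenvector direction wanted.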
Recommendation

Size                         Reduction                         Algorithm                        Post-processing
Small (up to 4)              Definition: characteristic        Polynomial root finding          Solution of linear systems
                             polynomial                        (eigenvalues)                    (eigenvectors)
Symmetric, intermediate      Jacobi sweeps                     Selective Jacobi rotations       --
(say, 4-12)
Symmetric, large             Tridiagonalization (Givens        Sturm sequence property:         Inverse iteration (eigenvalue
                             rotation or Householder method,   bracketing and bisection         improvement and eigenvectors)
                             usually Householder)              (rough eigenvalues), or QR
                                                               decomposition iterations
Nonsymmetric, intermediate   Balancing, and then reduction     QR decomposition iterations      Inverse iteration
and large                    to Hessenberg form (above         (eigenvalues)                    (eigenvectors)
                             methods or Gaussian elimination)
General, very large          --                                Power method, shift              Inverse iteration (eigenvalue
(selective requirement)                                        and deflation                    improvement and eigenvectors)
Points to note

Outline
A = U Σ V^T, with Σ an m × n "diagonal" matrix:

Σ = [ diag(σ_1, σ_2, ..., σ_p)  0 ;
      0                         0 ],   p = min(m, n),

padded with zero rows or columns as required.
Properties of SVD

With y = V^T x,

Ax = U Σ V^T x = U Σ y = [u_1 ... u_r  u_{r+1} ... u_m] [σ_1 y_1 ... σ_r y_r  0 ... 0]^T
   = σ_1 y_1 u_1 + σ_2 y_2 u_2 + ... + σ_r y_r u_r.
||A||_2 = max_v ||Av|| / ||v||, and with v = Vc,

||Av||^2 / ||v||^2 = (v^T A^T A v)/(v^T v) = (c^T V^T A^T A V c)/(c^T V^T V c)
                   = (c^T Σ^T Σ c)/(c^T c) = (Σ_k σ_k^2 c_k^2)/(Σ_k c_k^2).

Hence
||A||_2 = max_c sqrt( (Σ_k σ_k^2 c_k^2)/(Σ_k c_k^2) ) = σ_max.

For nonsingular A,
A^(-1) = (U Σ V^T)^(-1) = V Σ^(-1) U^T = V diag(1/σ_1, 1/σ_2, ..., 1/σ_n) U^T.

Then, ||A^(-1)|| = 1/σ_min.
With U = [U_r  U'] and V = [V_r  V'],

A = U Σ V^T = [U_r  U'] [ Σ_r  0 ; 0  0 ] [V_r  V']^T,

or,
A = U_r Σ_r V_r^T = Σ_{k=1}^{r} σ_k u_k v_k^T.
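These properties are easy to confirm numerically; the 3×3 matrix below is an assumed example:

```python
import numpy as np

A = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 1.0]])

U, s, Vt = np.linalg.svd(A)          # A = U diag(s) V^T, s sorted descending

# outer-product expansion A = sum_k sigma_k u_k v_k^T
A_rebuilt = sum(s[k] * np.outer(U[:, k], Vt[k, :]) for k in range(3))

# ||A||_2 = sigma_max and ||A^{-1}|| = 1/sigma_min
norm2 = np.linalg.norm(A, 2)
inv_norm2 = np.linalg.norm(np.linalg.inv(A), 2)

# A^{-1} = V diag(1/sigma) U^T
A_inv = Vt.T @ np.diag(1.0 / s) @ U.T
```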
SVD Algorithm

Generalized inverse: G is called a generalized inverse or g-inverse of A if, for b ∈ Range(A), Gb is a solution of Ax = b.

The Moore-Penrose inverse or the pseudoinverse:
A^# = (U Σ V^T)^# = (V^T)^# Σ^# U^# = V Σ^# U^T

With Σ = [ Σ_r  0 ; 0  0 ],   Σ^# = [ Σ_r^(-1)  0 ; 0  0 ].

Or, Σ^# = diag(τ_1, τ_2, ..., τ_p) (padded with zeros), where
τ_k = 1/σ_k  for σ_k ≠ 0, or for |σ_k| > ε;
τ_k = 0      for σ_k = 0, or for |σ_k| ≤ ε.
(A^#)^# = A.
If A is invertible, then A^# = A^(-1).
Pseudoinverse solution of Ax = b:

x* = V Σ^# U^T b = Σ_{k=1}^{r} (1/σ_k) v_k u_k^T b = Σ_{k=1}^{r} (u_k^T b / σ_k) v_k

Minimize
E(x) = (1/2)(Ax - b)^T (Ax - b) = (1/2) x^T A^T A x - x^T A^T b + (1/2) b^T b

⇒ σ_k^2 v_k^T x = σ_k u_k^T b,  i.e.  v_k^T x = u_k^T b / σ_k,  for k = 1, 2, 3, ..., r.
With V' = [v_{r+1}  v_{r+2}  ...  v_n], the complete solution is

x = Σ_{k=1}^{r} (u_k^T b / σ_k) v_k + V' y = x* + V' y.
Points to note
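The pseudoinverse solution formula, including the τ_k cutoff for (near-)zero singular values, in code (names and test systems are ours):

```python
import numpy as np

def pinv_solve(A, b, eps=1e-12):
    """x* = sum_{k<=r} (u_k^T b / sigma_k) v_k, skipping sigma_k <= eps."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    x = np.zeros(A.shape[1])
    for k in range(len(s)):
        if s[k] > eps:                         # tau_k = 1/sigma_k only if sigma_k > eps
            x += (U[:, k] @ b / s[k]) * Vt[k, :]
    return x
```

For a full-rank overdetermined system this is the least-squares solution; for a rank-deficient system it is the minimum-norm solution (y = 0 in x = x* + V'y).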
Outline
Group
Field
Vector Space
Linear Transformation
Isomorphism
Inner Product Space
Function Space

Group
Closure: a + b ∈ G, ∀ a, b ∈ G
Associativity: a + (b + c) = (a + b) + c, ∀ a, b, c ∈ G
(existence of an identity element and of inverses completes the definition)

Commutative group
Examples: (Z, +), (R, +), (Q - {0}, ×), (F, +).

Subgroup
Field

Subfield
Vector Space

A vector space is defined by
a field F of scalars,
a commutative group V of vectors, and
a scalar multiplication F × V → V, distributive over both additions.
Suppose V is a vector space. Take a vector α_1 ≠ 0 in it.
Finite dimensional vector space

Subspace
Linear Transformation

A mapping T : V → W satisfying
T(k_1 a + k_2 b) = k_1 T(a) + k_2 T(b), ∀ a, b ∈ V and scalars k_1, k_2.

For V, basis α_1, α_2, ..., α_n
For W, basis β_1, β_2, ..., β_m
Matrix A = [a_1  a_2  ...  a_n], the j-th column a_j holding the coordinates of T(α_j) in the basis of W.
For ξ = x_1 α_1 + x_2 α_2 + ... + x_n α_n,
coordinates in a column: x = [x_1  x_2  ...  x_n]^T.

Mapping:
T(ξ) = x_1 T(α_1) + x_2 T(α_2) + ... + x_n T(α_n),
with coordinates Ax, as we know!

Summary:
Understanding:
Isomorphism

dim V = dim W

Consider vector spaces V and W over the same field F and of the same dimension n.

Question: Can we define an isomorphism between them?
Answer: Of course. As many as we want!

The underlying field and the dimension together completely specify a vector space, up to an isomorphism.
Inner Product Space

∀ a, b ∈ V, (a, b) ∈ F;
(a, a) ≥ 0, with equality only for a = 0.
Function Space

A function f, sampled at points x_1, x_2, x_3, ..., x_N, corresponds to the vector
[f(x_1)  f(x_2)  f(x_3)  ...  f(x_N)]^T.
Vector space of continuous functions
Inner product: For functions f(x) and g(x) in F, the usual inner product between corresponding vectors:
(v_f, v_g) = v_f^T v_g = f(x_1)g(x_1) + f(x_2)g(x_2) + f(x_3)g(x_3) + ...

Weighted inner product: (v_f, v_g) = v_f^T W v_g = Σ_i w_i f(x_i) g(x_i)

In the limit, (f, g) = ∫_a^b w(x) f(x) g(x) dx.

Orthogonality: (f, g) = ∫_a^b w(x) f(x) g(x) dx = 0

Norm: ||f|| = sqrt( ∫_a^b w(x) [f(x)]^2 dx )

Orthonormal basis:
(f_j, f_k) = ∫_a^b w(x) f_j(x) f_k(x) dx = δ_jk  ∀ j, k
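The passage from the discrete weighted inner product to the integral can be illustrated numerically; here with the assumed example f_1 = sin x, f_2 = sin 2x on [0, π], w(x) = 1, for which (f_1, f_2) = 0 and (f_1, f_1) = π/2:

```python
import math

def inner_product(f, g, a, b, w=lambda x: 1.0, n=2000):
    """(f, g) = integral of w(x) f(x) g(x) over [a, b], composite trapezoidal rule."""
    h = (b - a) / n
    total = 0.5 * (w(a) * f(a) * g(a) + w(b) * f(b) * g(b))
    for i in range(1, n):
        x = a + i * h
        total += w(x) * f(x) * g(x)
    return total * h

f1 = lambda x: math.sin(x)
f2 = lambda x: math.sin(2 * x)
```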
Points to note

Outline
Gradient

∇f(x) = ∂f/∂x = [∂f/∂x_1  ∂f/∂x_2  ...  ∂f/∂x_n]^T

Relationships:
∂f/∂x_j = (∇f)^T e_j,
∂f/∂d = d^T ∇f(x)  (directional derivative), and
max_d ∂f/∂d = ||∇f(x)||.
Hessian

H(x) = ∂²f/∂x² =
[ ∂²f/∂x_1²      ∂²f/∂x_1∂x_2   ...  ∂²f/∂x_1∂x_n ;
  ∂²f/∂x_2∂x_1   ∂²f/∂x_2²      ...  ∂²f/∂x_2∂x_n ;
  :              :                   :             ;
  ∂²f/∂x_n∂x_1   ∂²f/∂x_n∂x_2   ...  ∂²f/∂x_n²    ]

Meaning: f(x + δx) ≈ f(x) + [∇f(x)]^T δx + (1/2) δx^T [∂²f/∂x² (x)] δx.
Taylor's Series

Taylor's formula in the remainder form:
f(x + δx) = f(x) + f'(x) δx + (1/2!) f''(x) δx² + ...
            + (1/(n-1)!) f^(n-1)(x) δx^(n-1) + (1/n!) f^(n)(x_c) δx^n,
where x_c = x + t δx with 0 ≤ t ≤ 1.
Mean value theorem: existence of x_c.

Taylor's series:
f(x + δx) = f(x) + f'(x) δx + (1/2!) f''(x) δx² + ...

For a multivariate function,
f(x + δx) = f(x) + [δx^T ∇] f(x) + (1/2!) [δx^T ∇]² f(x) + ...
            + (1/(n-1)!) [δx^T ∇]^(n-1) f(x) + (1/n!) [δx^T ∇]^n f(x + t δx);

f(x + δx) ≈ f(x) + [∇f(x)]^T δx + (1/2) δx^T [∂²f/∂x² (x)] δx.
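The truncated quadratic model can be checked against the exact function; f(x, y) = x²y + sin y is an assumed example with hand-computed gradient and Hessian:

```python
import math

def f(x, y):
    return x * x * y + math.sin(y)

def grad(x, y):
    return (2 * x * y, x * x + math.cos(y))

def hess(x, y):
    return ((2 * y, 2 * x), (2 * x, -math.sin(y)))

def quadratic_model(x, y, dx, dy):
    """f(x) + [grad f]^T dx + (1/2) dx^T H dx, the second-order Taylor model."""
    gx, gy = grad(x, y)
    (hxx, hxy), (hyx, hyy) = hess(x, y)
    return (f(x, y) + gx * dx + gy * dy
            + 0.5 * (hxx * dx * dx + (hxy + hyx) * dx * dy + hyy * dy * dy))
```

For a small step the quadratic model error is third order in the step, visibly smaller than the first-order (gradient-only) error.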
Differential:
df = (∂f/∂x_1) dx_1 + (∂f/∂x_2) dx_2 + ... + (∂f/∂x_n) dx_n

Chain rule:
df/dt = ∂f/∂t + [∇f(x)]^T (dx/dt)
For f(v, x(v)),

∂/∂v_i [f(v, x(v))] = ∂f/∂v_i + [∂x/∂v_i]^T ∇_x f(v, x),

and, collecting the components,

∇_v [f(v, x(v))] = ∇_v f(v, x) + [∂x/∂v (v)]^T ∇_x f(v, x).
where the Jacobian determinant is |J(u, v, w)| = ∂(x, y, z)/∂(u, v, w).
For Φ(x) = ∫_{u(x)}^{v(x)} f(x, t) dt,

dΦ/dx = ∂Φ/∂x + (∂Φ/∂u)(du/dx) + (∂Φ/∂v)(dv/dx),
with ∂Φ/∂x = ∫_u^v (∂f/∂x)(x, t) dt.

Now, considering a function F(x, t) such that f(x, t) = ∂F(x, t)/∂t,
Φ(x) = ∫_u^v (∂F/∂t)(x, t) dt = F(x, v) - F(x, u) ≡ Φ(x, u, v).

Using ∂Φ/∂v = f(x, v) and ∂Φ/∂u = -f(x, u),

dΦ/dx = ∫_{u(x)}^{v(x)} (∂f/∂x)(x, t) dt + f(x, v) (dv/dx) - f(x, u) (du/dx):

Leibnitz rule.
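A numerical check of the Leibnitz rule against a closed form; f(x, t) = x t², u(x) = x, v(x) = x² are assumed, for which Φ(x) = (x⁷ - x⁴)/3 and hence Φ'(x) = (7x⁶ - 4x³)/3:

```python
import math

def leibnitz_derivative(f, dfdx, u, du, v, dv, x, n=1000):
    """d/dx of Phi(x) = integral of f(x, t) over t in [u(x), v(x)],
    assembled from the three Leibnitz-rule terms."""
    a, b = u(x), v(x)
    h = (b - a) / n
    integral = 0.5 * (dfdx(x, a) + dfdx(x, b))   # trapezoidal rule for the df/dx term
    for i in range(1, n):
        integral += dfdx(x, a + i * h)
    integral *= h
    return integral + f(x, v(x)) * dv(x) - f(x, u(x)) * du(x)

f = lambda x, t: x * t * t
dfdx = lambda x, t: t * t
```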
Numerical Differentiation

∂²f/∂x_i² ≈ [f(x + δ e_i) - 2 f(x) + f(x - δ e_i)] / δ²,  and

∂²f/∂x_i∂x_j ≈ [f(x + δ e_i + δ e_j) - f(x + δ e_i - δ e_j)
                - f(x - δ e_i + δ e_j) + f(x - δ e_i - δ e_j)] / (4δ²)
An Introduction to Tensors*
Cartesian tensors
Symmetric tensors
Tensor fields

Points to note
Implicit functions
Leibnitz rule
Numerical derivatives
Outline
Cross product as a matrix product: a × b = ã b, with the skew-symmetric dual matrix

ã = [  0    -a_z   a_y ;
      a_z    0    -a_x ;
     -a_y   a_x    0   ].
Curves in Space

Unit tangent: u = r'/||r'||

Length of the curve: l = ∫_a^b ||dr|| = ∫_a^b sqrt(r' · r') dt

Arc length function:
s(t) = ∫_a^t sqrt( r'(τ) · r'(τ) ) dτ,
with ds = ||dr|| = sqrt(dx² + dy² + dz²) and ds/dt = ||r'||.
Curve r(t) is regular if r'(t) ≠ 0 ∀ t.

Observations
For a regular curve, ds/dt ≠ 0.
For a unit speed curve r(s), ||r'(s)|| = 1 and the unit tangent is u(s) = r'(s).
Principal normal:
p = (1/κ) u'(s),  with κ(s) = ||u'(s)|| the curvature.

r''(t) = (d||r'||/dt) u(t) + ||r'(t)|| du/dt = (d||r'||/dt) u(t) + κ(t) ||r'||² p(t)

[Figure: osculating plane containing r' and r''; centre of curvature C with radius of curvature AC = ρ = 1/κ from the point A on the curve]
Binormal: b = u × p
Serret-Frenet frame: right-handed triad {u, p, b}

What is p'?
Taking p' = -κu + τb,
b' = u' × p + u × p' = u × (-κu + τb) = -τp.
Torsion of the curve: τ(s) = -p(s) · b'(s)

We have u' and b'. What is p'?
p' = (b × u)' = b' × u + b × u' = -τ(p × u) + κ(b × p) = τb - κu, consistent with the above.
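For a concrete curve, the frame, curvature and torsion can be computed from r', r'', r''' using the cross-product formulas κ = ||r' × r''||/||r'||³ and τ = (r' × r'') · r''' / ||r' × r''||², which follow from the Frenet relations above. The circular helix r(t) = (cos t, sin t, t), for which κ = τ = 1/2, is an assumed example:

```python
import numpy as np

def frenet(r1, r2, r3):
    """Frenet frame (u, p, b), curvature and torsion from r'(t), r''(t), r'''(t)."""
    u = r1 / np.linalg.norm(r1)                    # unit tangent
    c = np.cross(r1, r2)
    kappa = np.linalg.norm(c) / np.linalg.norm(r1) ** 3
    b = c / np.linalg.norm(c)                      # binormal
    p = np.cross(b, u)                             # principal normal
    tau = (c @ r3) / np.linalg.norm(c) ** 2
    return u, p, b, kappa, tau

t = 0.7                                            # evaluation point (assumed)
r1 = np.array([-np.sin(t), np.cos(t), 1.0])
r2 = np.array([-np.cos(t), -np.sin(t), 0.0])
r3 = np.array([np.sin(t), -np.cos(t), 0.0])
u, p, b, kappa, tau = frenet(r1, r2, r3)
```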
Surfaces*

Parametric surface equation: r = r(u, v)

Unit normal:
n = N/||N|| = (r_u × r_v) / ||r_u × r_v||.

Points to note

Outline
The del operator
∇ ≡ i ∂/∂x + j ∂/∂y + k ∂/∂z

it is a vector,
it signifies a differentiation, and
it operates from the left side.

Laplacian operator:
∇² ≡ ∇ · ∇ = ∂²/∂x² + ∂²/∂y² + ∂²/∂z²

Laplace's equation:
∂²φ/∂x² + ∂²φ/∂y² + ∂²φ/∂z² = 0

Solution of ∇²φ = 0: harmonic function
Gradient
grad φ ≡ ∇φ = (∂φ/∂x) i + (∂φ/∂y) j + (∂φ/∂z) k

Divergence
div V ≡ ∇ · V = ∂V_x/∂x + ∂V_y/∂y + ∂V_z/∂z
Curl

curl V ≡ ∇ × V = | i      j      k    |
                 | ∂/∂x   ∂/∂y   ∂/∂z |
                 | V_x    V_y    V_z  |

= (∂V_z/∂y - ∂V_y/∂z) i + (∂V_x/∂z - ∂V_z/∂x) j + (∂V_y/∂x - ∂V_x/∂y) k

If V = ω × r represents the velocity field, then angular velocity
ω = (1/2) curl V.
Composite operations

Operator ∇ is linear:
∇(φ + ψ) = ∇φ + ∇ψ,
∇ · (V + W) = ∇ · V + ∇ · W, and
∇ × (V + W) = ∇ × V + ∇ × W.

Considering the products φψ, φV, V · W and V × W:
∇(φψ) = φ ∇ψ + ψ ∇φ
∇ · (φV) = φ ∇ · V + (∇φ) · V
∇ × (φV) = φ ∇ × V + (∇φ) × V
∇(V · W) = (W · ∇)V + (V · ∇)W + W × (∇ × V) + V × (∇ × W)
∇ · (V × W) = W · (∇ × V) - V · (∇ × W)
∇ × (V × W) = (W · ∇)V - W(∇ · V) - (V · ∇)W + V(∇ · W)

Note: the expression V · ∇ ≡ V_x ∂/∂x + V_y ∂/∂y + V_z ∂/∂z is an operator!
Closure

div grad φ ≡ ∇ · (∇φ)
curl grad φ ≡ ∇ × (∇φ)
div curl V ≡ ∇ · (∇ × V)
curl curl V ≡ ∇ × (∇ × V)
grad div V ≡ ∇(∇ · V)

Important identities:
div grad φ ≡ ∇ · (∇φ) = ∇²φ
curl grad φ ≡ ∇ × (∇φ) = 0
div curl V ≡ ∇ · (∇ × V) = 0
curl curl V ≡ ∇ × (∇ × V) = ∇(∇ · V) - ∇²V = grad div V - ∇²V
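The identities curl grad φ = 0 and div curl V = 0 can be verified numerically with nested central differences; the scalar and vector fields below are assumed examples:

```python
import math

h = 1e-5   # finite-difference step (assumed)

def grad_phi(phi, x):
    """Central-difference gradient of a scalar field phi([x, y, z])."""
    out = []
    for i in range(3):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        out.append((phi(xp) - phi(xm)) / (2 * h))
    return out

def curl(V, x):
    """Central-difference curl of a vector field V([x, y, z]) -> [Vx, Vy, Vz]."""
    def dVi_dxj(i, j):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        return (V(xp)[i] - V(xm)[i]) / (2 * h)
    return [dVi_dxj(2, 1) - dVi_dxj(1, 2),
            dVi_dxj(0, 2) - dVi_dxj(2, 0),
            dVi_dxj(1, 0) - dVi_dxj(0, 1)]

def div(V, x):
    def dVi_dxi(i):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        return (V(xp)[i] - V(xm)[i]) / (2 * h)
    return sum(dVi_dxi(i) for i in range(3))

phi = lambda x: x[0] * x[1] ** 2 + math.sin(x[2])
V = lambda x: [x[1] * x[2], x[0] ** 2, x[0] + x[2] ** 2]
```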
∫_C V · dr is independent of path.
Circulation ∮ V · dr = 0 around any closed path.
curl V = 0.
Field V is conservative.
Integral Theorems

Green's theorem in the plane:
∮_C (F_1 dx + F_2 dy) = ∬_R (∂F_2/∂x - ∂F_1/∂y) dx dy

[Figure: (a) a simple domain bounded by y_1(x), y_2(x) for a ≤ x ≤ b, equivalently by x_1(y), x_2(y) for c ≤ y ≤ d; (b) a general domain, split into simple sub-domains]
Proof:
∬_R (∂F_1/∂y) dx dy = ∫_a^b ∫_{y_1(x)}^{y_2(x)} (∂F_1/∂y) dy dx
  = ∫_a^b [F_1(x, y_2(x)) - F_1(x, y_1(x))] dx = -∮_C F_1 dx.

Similarly,
∬_R (∂F_2/∂x) dx dy = ∫_c^d ∫_{x_1(y)}^{x_2(y)} (∂F_2/∂x) dx dy = ∮_C F_2(x, y) dy.

Difference: ∮_C (F_1 dx + F_2 dy) = ∬_R (∂F_2/∂x - ∂F_1/∂y) dx dy.

In alternative form, ∮_C F · dr = ∬_R (curl F) · k dx dy.
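Green's theorem is easy to check numerically. For the assumed example F_1 = -x²y, F_2 = xy² on the unit disk, ∂F_2/∂x - ∂F_1/∂y = x² + y², so both sides equal π/2:

```python
import math

def line_integral(F1, F2, x, y, dx, dy, n=2000):
    """Closed-curve integral of (F1 dx + F2 dy) over (x(t), y(t)), t in [0, 2*pi];
    a plain Riemann sum is very accurate for a smooth periodic parametrization."""
    total = 0.0
    for i in range(n):
        t = 2 * math.pi * i / n
        total += F1(x(t), y(t)) * dx(t) + F2(x(t), y(t)) * dy(t)
    return total * (2 * math.pi / n)

F1 = lambda x, y: -x * x * y
F2 = lambda x, y: x * y * y
circulation = line_integral(F1, F2,
                            lambda t: math.cos(t), lambda t: math.sin(t),
                            lambda t: -math.sin(t), lambda t: math.cos(t))
# area integral of (x^2 + y^2) over the unit disk = pi/2
```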
Integral Theorems
511,
Integral Theorems
512,
To show:
Fx
Fy
Fz
+
+
x
y
z
RR R
T
Fz
z dx
dy dz =
dx dy dz =
Z Z
S
R R
S
Fz nz dS
Integral Theorems
513,
Fx
Fy
Fz
+
+
x
y
z
dx dy dz =
Z Z
R R
R R R Fz
dx
dy
dz
=
Fz nz dS
To show:
z
S
T
First consider a region, the boundary of which is intersected at
most twice by any line parallel to a coordinate axis.
Integral Theorems
514,
Integral Theorems
515,
Integral Theorems
516,
Integral Theorems
517,
Integral Theorems
518,
Integral Theorems
519,
Stokes's theorem
S: a piecewise smooth surface
C: boundary, a piecewise smooth simple closed curve
F(x, y, z): first order continuous vector function

∮_C F · dr = ∬_S curl F · n dS
Proof sketch, for the term F_x i, on a surface z = f(x, y):
the normal is along [-∂f/∂x  -∂f/∂y  1]^T, so n_y = -n_z ∂z/∂y, and

∬_S (∂F_x/∂z n_y - ∂F_x/∂y n_z) dS = -∬_S (∂F_x/∂y + ∂F_x/∂z ∂z/∂y) n_z dS,

while, writing F̄_x(x, y) = F_x(x, y, f(x, y)),
LHS = ∮_C F_x dx = ∮ F̄_x(x, y) dx = -∬_R (∂F̄_x/∂y) dx dy,
which is the same quantity.

Similar results for F_y(x, y, z) j and F_z(x, y, z) k.
Points to note
Outline
Polynomial Equations
Basic Principles
Analytical Solution
General Polynomial Equations
Two Simultaneous Equations
Elimination Methods*
Advanced Techniques*
Basic Principles

Fundamental theorem of algebra
$$p(x) = a_0 x^n + a_1 x^{n-1} + a_2 x^{n-2} + \cdots + a_{n-1} x + a_n$$
has exactly n roots $x_1, x_2, \ldots, x_n$; with
$$p(x) = a_0 (x - x_1)(x - x_2)(x - x_3) \cdots (x - x_n).$$
In general, roots are complex.

Multiplicity: A root of p(x) with multiplicity k satisfies
$$p(x) = p'(x) = p''(x) = \cdots = p^{(k-1)}(x) = 0.$$
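A quick numerical illustration of both statements (an illustrative sketch using numpy, not part of the original slides; the polynomial chosen here is arbitrary):

```python
import numpy as np

# p(x) = x^3 - 4x^2 + 5x - 2 = (x - 1)^2 (x - 2): a double root at x = 1
# (multiplicity 2) and a simple root at x = 2.
coeffs = [1, -4, 5, -2]
roots = np.roots(coeffs)                  # all n = 3 roots, possibly complex
print(np.sort(roots.real))                # ~ [1, 1, 2]

# A root of multiplicity k also annihilates derivatives up to order k - 1.
p = np.poly1d(coeffs)
dp = p.deriv()
assert abs(p(1.0)) < 1e-9 and abs(dp(1.0)) < 1e-9   # x = 1: multiplicity 2
assert abs(p(2.0)) < 1e-9 and abs(dp(2.0)) > 0.5    # x = 2: simple root
```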
Analytical Solution

Quadratic equation
$$ax^2 + bx + c = 0 \quad \Rightarrow \quad x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$
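In floating point, the textbook formula suffers cancellation when $b^2 \gg 4ac$; a common remedy is to compute the larger-magnitude root first and recover the other from the product of roots. A minimal sketch (assuming real roots and $a \neq 0$):

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of ax^2 + bx + c = 0, avoiding cancellation
    between -b and the square root when b^2 >> 4ac."""
    disc = b * b - 4.0 * a * c
    if disc < 0:
        raise ValueError("complex roots")
    q = -0.5 * (b + math.copysign(math.sqrt(disc), b))
    x1 = q / a
    x2 = c / q if q != 0 else -b / (2 * a)  # q == 0 only when b = disc = 0
    return x1, x2

print(solve_quadratic(1.0, -3.0, 2.0))      # roots of x^2 - 3x + 2
```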
Analytical Solution

Cubic equation: $y^3 + py + q = 0$
Substituting $y = u + v$,
$$uv = -p/3, \qquad u^3 + v^3 = -q,$$
and hence
$$(u^3 - v^3)^2 = q^2 + \frac{4p^3}{27}.$$
Solution:
$$u^3, v^3 = -\frac{q}{2} \pm \sqrt{\frac{q^2}{4} + \frac{p^3}{27}} = A, B \ \text{(say)}.$$
With $A_1 = A^{1/3}$, $B_1 = B^{1/3}$ and $\omega$ a primitive cube root of unity,
$$u = A_1, A_1\omega, A_1\omega^2, \quad \text{and} \quad v = B_1, B_1\omega, B_1\omega^2;$$
$$y_1 = A_1 + B_1, \quad y_2 = A_1\omega + B_1\omega^2 \quad \text{and} \quad y_3 = A_1\omega^2 + B_1\omega.$$
At least one of the solutions is real!
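Cardano's construction above can be sketched directly in code (an illustration, not from the slides; complex arithmetic is used throughout, and the pairing $uv = -p/3$ is enforced explicitly so the three $u$-$v$ combinations are the valid ones):

```python
import cmath

def cardano(p, q):
    """Roots of the depressed cubic y^3 + p y + q = 0 via Cardano's formula."""
    w = cmath.exp(2j * cmath.pi / 3)                  # primitive cube root of unity
    s = cmath.sqrt(q * q / 4 + p ** 3 / 27)
    u = (-q / 2 + s) ** (1 / 3)                       # one cube root of A
    v = -p / (3 * u) if u != 0 else (-q / 2 - s) ** (1 / 3)  # enforce uv = -p/3
    return [u + v, u * w + v * w ** 2, u * w ** 2 + v * w]

roots = cardano(-7.0, 6.0)             # y^3 - 7y + 6 = (y - 1)(y - 2)(y + 3)
print(sorted(r.real for r in roots))   # ~ [-3, 1, 2]
```

Note that even for three real roots the intermediate square root is complex (the "casus irreducibilis"), which is why `cmath` rather than `math` is needed.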
Analytical Solution

Quartic equation: $x^4 + ax^3 + bx^2 + cx + d = 0$ can be rearranged as
$$\left( x^2 + \frac{a}{2}x \right)^2 = \left( \frac{a^2}{4} - b \right) x^2 - cx - d.$$
Introducing an auxiliary variable y,
$$\left( x^2 + \frac{a}{2}x + \frac{y}{2} \right)^2 = \left( \frac{a^2}{4} - b + y \right) x^2 + \left( \frac{ay}{2} - c \right) x + \left( \frac{y^2}{4} - d \right).$$
Choosing y so that the right-hand side becomes a perfect square,
$$\left( \frac{ay}{2} - c \right)^2 - 4\left( \frac{a^2}{4} - b + y \right)\left( \frac{y^2}{4} - d \right) = 0.$$
Resolvent of a quartic:
$$y^3 - by^2 + (ac - 4d)y + (4bd - a^2 d - c^2) = 0$$

Procedure: solve the resolvent cubic for one value of y, write the right-hand side as $(ex + f)^2$, and solve the two quadratics
$$x^2 + \frac{a}{2}x + \frac{y}{2} = \pm(ex + f).$$
General Polynomial Equations

For the monic polynomial $p(x) = x^n + a_1 x^{n-1} + \cdots + a_{n-1}x + a_n$, the roots are the eigenvalues of the

Companion matrix:
$$\begin{bmatrix}
0 & 0 & \cdots & 0 & -a_n \\
1 & 0 & \cdots & 0 & -a_{n-1} \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 0 & -a_2 \\
0 & 0 & \cdots & 1 & -a_1
\end{bmatrix}$$
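This observation is easy to check numerically (a small sketch; `numpy.roots` itself works the same way, so the example polynomial below is only illustrative):

```python
import numpy as np

def companion_roots(a):
    """Roots of the monic polynomial x^n + a[0] x^(n-1) + ... + a[-1]
    as eigenvalues of its companion matrix."""
    n = len(a)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)          # sub-diagonal of ones
    C[:, -1] = -np.asarray(a)[::-1]     # last column: -a_n, ..., -a_1
    return np.linalg.eigvals(C)

# x^3 - 7x^2 + 14x - 8 = (x - 1)(x - 2)(x - 4)
print(np.sort(companion_roots([-7.0, 14.0, -8.0]).real))   # ~ [1, 2, 4]
```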
General Polynomial Equations

Attempt to separate real linear factors?
Bairstow's method: to separate out factors of small degree.
Two Simultaneous Equations

$$p_1 x^2 + q_1 xy + r_1 y^2 + u_1 x + v_1 y + w_1 = 0$$
$$p_2 x^2 + q_2 xy + r_2 y^2 + u_2 x + v_2 y + w_2 = 0$$
Rearranging, with coefficients that are functions of y,
$$a_1 x^2 + b_1 x + c_1 = 0$$
$$a_2 x^2 + b_2 x + c_2 = 0$$
Cramer's rule:
$$\frac{x^2}{b_1 c_2 - b_2 c_1} = \frac{x}{a_2 c_1 - a_1 c_2} = \frac{1}{a_1 b_2 - a_2 b_1}$$
$$\Rightarrow \quad x = \frac{a_2 c_1 - a_1 c_2}{a_1 b_2 - a_2 b_1} = \frac{b_1 c_2 - b_2 c_1}{a_2 c_1 - a_1 c_2}$$
Consistency condition:
$$(a_2 c_1 - a_1 c_2)^2 = (b_1 c_2 - b_2 c_1)(a_1 b_2 - a_2 b_1)$$

Elimination Methods*

Bezout's method

Advanced Techniques*
Cascaded elimination? Objections!
Gröbner basis representation (algebraic geometry)

Points to note
Outline

Solution of Nonlinear Equations and Systems

Rearrange f(x) = 0 in the form x = g(x).
Example:
For $f(x) = \tan x - x^3 - 2$, possible rearrangements:
$$g_1(x) = \tan^{-1}(x^3 + 2), \qquad g_2(x) = (\tan x - 2)^{1/3}, \qquad g_3(x) = \frac{\tan x - 2}{x^2}$$
Iteration: $x_{k+1} = g(x_k)$
[Figure: staircase iteration between the curves y = x and y = g(x)]
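The iteration can be sketched as below (illustrative code; $g_1$ is used because it is contractive near the root of the slide's example, which the other rearrangements need not be):

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=200):
    """Iterate x_{k+1} = g(x_k) until successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence")

g1 = lambda x: math.atan(x ** 3 + 2)        # rearrangement of tan x - x^3 - 2 = 0
root = fixed_point(g1, 1.0)
print(root, math.tan(root) - root ** 3 - 2) # residual ~ 0
```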
[Figure: a root-finding construction on f(x), approaching the root x* from the starting point x0]

Secant method:
$$x_{k+1} = x_k - \frac{x_k - x_{k-1}}{f(x_k) - f(x_{k-1})} f(x_k)$$
[Figure: secant iterates x1, x2, x3 converging to x* from x0, with f(x0) and f(x1) marked]
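The secant formula above translates directly into code (a minimal sketch; the starting pair brackets the same root used in the fixed-point example):

```python
import math

def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """x_{k+1} = x_k - (x_k - x_{k-1}) / (f(x_k) - f(x_{k-1})) * f(x_k)."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            break                       # flat secant: cannot proceed
        x2 = x1 - (x1 - x0) / (f1 - f0) * f1
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

root = secant(lambda x: math.tan(x) - x ** 3 - 2, 1.2, 1.3)
print(root)    # ~ 1.350
```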
[Figure: quadratic interpolation and inverse quadratic interpolation through the points (x0, y0), (x1, y1), (x2, y2), each yielding a new estimate x3]
Systems of nonlinear equations:
$$f_1(x_1, x_2, \ldots, x_n) = 0, \quad f_2(x_1, x_2, \ldots, x_n) = 0, \quad \ldots, \quad f_n(x_1, x_2, \ldots, x_n) = 0;$$
in vector form, $\mathbf{f}(\mathbf{x}) = \mathbf{0}$.
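For such systems, Newton's method solves a linear system with the Jacobian at every step. A minimal sketch (the example system and starting point are illustrative, not from the slides):

```python
import numpy as np

def newton_system(f, jac, x0, tol=1e-12, max_iter=50):
    """Newton's method for f(x) = 0: solve J(x_k) dx = -f(x_k), update x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x + np.linalg.solve(jac(x), -fx)
    return x

# Example: x^2 + y^2 = 5 and x*y = 2; one solution is (2, 1).
f = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 5.0, v[0] * v[1] - 2.0])
jac = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
sol = newton_system(f, jac, [2.5, 0.5])
print(sol)    # ~ [2, 1]
```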
Closure

Points to note
Outline

Optimization: Introduction
The Methodology of Optimization
Single-Variable Optimization
Conceptual Background of Multivariate Optimization

The Methodology of Optimization
Optimization methods
Sensitivity analysis

Single-Variable Optimization
[Figure: a single-variable function over [a, b] with stationary points x1, ..., x6]
Optimality criteria
First order necessary condition: If $x^*$ is a local minimum or maximum point and if $f'(x^*)$ exists, then $f'(x^*) = 0$.
Second order necessary condition: If $x^*$ is a local minimum point and $f''(x^*)$ exists, then $f''(x^*) \ge 0$.
Second order sufficient condition: If $f'(x^*) = 0$ and $f''(x^*) > 0$, then $x^*$ is a local minimum point.
$$\Delta = f(x^* + \Delta x) - f(x^*) = f'(x^*)\Delta x + \frac{1}{2!}f''(x^*)\Delta x^2 + \frac{1}{3!}f'''(x^*)\Delta x^3 + \frac{1}{4!}f^{iv}(x^*)\Delta x^4 + \cdots$$
For an extremum to occur at point $x^*$, the lowest order derivative with non-zero value should be of even order.
If $f''(x^*) = 0$, then higher order derivatives have to be examined.
Iterative methods of line search
Methods based on gradient root finding
Newton's method:
$$x_{k+1} = x_k - \frac{f'(x_k)}{f''(x_k)}$$
Secant method:
$$x_{k+1} = x_k - \frac{x_k - x_{k-1}}{f'(x_k) - f'(x_{k-1})} f'(x_k)$$
Bracketing:
$x_1 < x_2 < x_3$ with $f(x_1) \ge f(x_2) \le f(x_3)$
Exhaustive search method or its variants

Direct optimization algorithms
Fibonacci search uses a pre-defined number N of function evaluations, and the Fibonacci sequence
$$F_0 = 1, \quad F_1 = 1, \quad F_2 = 2, \quad \ldots, \quad F_j = F_{j-2} + F_{j-1}.$$
Golden section search uses the ratio
$$\frac{\sqrt{5} - 1}{2} \approx 0.618,$$
the golden section ratio of interval reduction, determined as the limiting case of $N \rightarrow \infty$; the actual number of steps is decided by the accuracy desired.
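A golden section search is short enough to sketch in full (illustrative code; the quadratic test function is an assumption, not from the slides):

```python
import math

def golden_section(f, a, b, tol=1e-8):
    """Golden-section search for a minimum of a unimodal f on [a, b]."""
    r = (math.sqrt(5) - 1) / 2            # interval reduction ratio ~ 0.618
    x1, x2 = b - r * (b - a), a + r * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 > f2:                       # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + r * (b - a)
            f2 = f(x2)
        else:                             # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - r * (b - a)
            f1 = f(x1)
    return 0.5 * (a + b)

xmin = golden_section(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
print(xmin)    # ~ 2.0
```

One interior point is reused at every step, so each iteration costs a single new function evaluation.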
Convexity
Set $S \subseteq R^n$ is a convex set if the line segment joining any two points $\mathbf{x}_1, \mathbf{x}_2 \in S$ lies entirely within S.
[Figure: a convex function f(x), with the chord joining (x1, f(x1)) and (x2, f(x2)) lying above the graph]
Quadratic function
$$q(\mathbf{x}) = \frac{1}{2}\mathbf{x}^T A \mathbf{x} + \mathbf{b}^T \mathbf{x} + c$$
Gradient $\nabla q(\mathbf{x}) = A\mathbf{x} + \mathbf{b}$ and Hessian $= A$ is constant.
Optimization Algorithms
From the current point, move to another point, hopefully better.
Which way to go? How far to go? Which decision is first?
Strategies and versions of algorithms:
Trust Region: Develop a local quadratic model
$$f(\mathbf{x}_k + \delta\mathbf{x}) = f(\mathbf{x}_k) + [\mathbf{g}(\mathbf{x}_k)]^T \delta\mathbf{x} + \frac{1}{2}\delta\mathbf{x}^T F_k\, \delta\mathbf{x},$$
and minimize it in a small trust region around $\mathbf{x}_k$.
(Define trust region with dummy boundaries.)
Line search: Identify a descent direction $\mathbf{d}_k$ and minimize the function along it through the univariate function
$$\phi(\alpha) = f(\mathbf{x}_k + \alpha\mathbf{d}_k).$$
Exact or accurate line search
Inexact or inaccurate line search
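One common form of inexact line search (not spelled out in the slides) is Armijo backtracking, shown here as an illustrative sketch:

```python
import numpy as np

def backtracking(f, g, x, d, alpha=1.0, rho=0.5, c=1e-4):
    """Armijo backtracking: shrink alpha until the sufficient-decrease
    condition f(x + alpha d) <= f(x) + c * alpha * g(x).d holds."""
    fx, slope = f(x), g(x) @ d           # slope < 0 for a descent direction
    while f(x + alpha * d) > fx + c * alpha * slope:
        alpha *= rho
    return alpha

f = lambda x: x @ x                      # f(x) = ||x||^2
g = lambda x: 2 * x
x = np.array([3.0, 4.0])
d = -g(x)                                # steepest descent direction
a = backtracking(f, g, x, d)
print(a, f(x + a * d) < f(x))            # accepted step strictly decreases f
```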
Points to note
Convexity in optimization
Trust region
Line search
Outline

Multivariate Optimization
Direct Methods
Steepest Descent (Cauchy) Method
Newton's Method
Hybrid (Levenberg-Marquardt) Method
Least Square Problems
Direct Methods

Rosenbrock's method
Useful for functions whose derivative either does not exist at all points in the domain or is computationally costly to evaluate.
Note: When derivatives are easily available, gradient-based algorithms appear as mainstream methods.
Direct Methods

Simplex search (of Nelder and Mead): reflect the worst vertex $\mathbf{x}_w$ through the centroid of the remaining vertices,
$$\mathbf{x}_c = \frac{1}{n}\sum_{i=1,\, i \neq w}^{n+1} \mathbf{x}_i.$$
Default $\mathbf{x}_{new} = \mathbf{x}_r$.
Revision possibilities:
[Figure: expansion, default reflection, positive contraction and negative contraction of the simplex, chosen by comparing f(x_r) with f(x_b), f(x_s) and f(x_w)]
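The reflection step at the heart of this scheme can be sketched as follows (illustrative code, not from the slides; the test function and simplex are assumptions):

```python
import numpy as np

def reflect_worst(simplex, f):
    """Replace the worst vertex by its mirror image through the
    centroid of the remaining vertices (default x_new = x_r)."""
    vals = [f(x) for x in simplex]
    w = int(np.argmax(vals))                       # index of worst vertex
    others = [x for i, x in enumerate(simplex) if i != w]
    xc = np.mean(others, axis=0)                   # centroid excluding x_w
    simplex[w] = 2 * xc - simplex[w]               # reflected point x_r
    return simplex

f = lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2
simplex = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
simplex = reflect_worst(simplex, f)
print(simplex[0])     # the worst vertex (0,0), f = 5, is replaced by (1,1), f = 1
```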
Steepest Descent (Cauchy) Method
Search direction $\mathbf{d}_k = -\mathbf{g}_k$ [or $\mathbf{d}_k = -\mathbf{g}_k/\|\mathbf{g}_k\|$]
Minimize $\phi(\alpha) = f(\mathbf{x}_k + \alpha\mathbf{d}_k)$.
Exact line search:
$$\phi'(\alpha_k) = [\mathbf{g}(\mathbf{x}_k + \alpha_k\mathbf{d}_k)]^T \mathbf{d}_k = 0$$
Search direction tangential to the contour surface at $(\mathbf{x}_k + \alpha_k\mathbf{d}_k)$.
Note: Next direction $\mathbf{d}_{k+1} = -\mathbf{g}(\mathbf{x}_{k+1})$ orthogonal to $\mathbf{d}_k$.
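For a quadratic $q(\mathbf{x}) = \frac{1}{2}\mathbf{x}^T A\mathbf{x} + \mathbf{b}^T\mathbf{x}$ the exact step along $-\mathbf{g}$ is available in closed form, $\alpha_k = \mathbf{g}^T\mathbf{g}/(\mathbf{g}^T A\mathbf{g})$, which makes the method easy to sketch (the matrix and vector below are illustrative):

```python
import numpy as np

# Steepest descent with exact line search on a quadratic,
# minimizing q(x) = 1/2 x^T A x + b^T x, i.e. solving A x = -b.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, -2.0])
x = np.zeros(2)
for _ in range(100):
    g = A @ x + b                          # gradient of q
    if np.linalg.norm(g) < 1e-12:
        break
    alpha = (g @ g) / (g @ A @ g)          # exact minimizer along -g
    x = x - alpha * g
print(x, np.linalg.norm(A @ x + b))        # x -> solution of A x = -b
```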
Newton's Method
Second order approximation of a function:
$$f(\mathbf{x}) \approx f(\mathbf{x}_k) + [\mathbf{g}(\mathbf{x}_k)]^T(\mathbf{x} - \mathbf{x}_k) + \frac{1}{2}(\mathbf{x} - \mathbf{x}_k)^T H(\mathbf{x}_k)(\mathbf{x} - \mathbf{x}_k)$$
Vanishing of gradient
$$\mathbf{g}(\mathbf{x}) \approx \mathbf{g}(\mathbf{x}_k) + H(\mathbf{x}_k)(\mathbf{x} - \mathbf{x}_k)$$
gives the iteration formula
$$\mathbf{x}_{k+1} = \mathbf{x}_k - [H(\mathbf{x}_k)]^{-1}\mathbf{g}(\mathbf{x}_k).$$
Excellent local convergence property:
$$\frac{\|\mathbf{x}_{k+1} - \mathbf{x}^*\|}{\|\mathbf{x}_k - \mathbf{x}^*\|^2} \ \text{remains bounded.}$$
Caution: Does not have global convergence.
If $H(\mathbf{x}_k)$ is positive definite, then $\mathbf{d}_k = -[H(\mathbf{x}_k)]^{-1}\mathbf{g}(\mathbf{x}_k)$ is a descent direction.
Newton's Method

Algorithm
1. Select $\mathbf{x}_0$, tolerance $\epsilon$ and $\delta > 0$. Set k = 0.
2. Evaluate $\mathbf{g}_k = \mathbf{g}(\mathbf{x}_k)$ and $H(\mathbf{x}_k)$. Choose $\mu$, find $F_k = H(\mathbf{x}_k) + \mu I$, solve $F_k \mathbf{d}_k = -\mathbf{g}_k$ for $\mathbf{d}_k$.
3. Line search: obtain $\alpha_k$ to minimize $\phi(\alpha) = f(\mathbf{x}_k + \alpha\mathbf{d}_k)$. Update $\mathbf{x}_{k+1} = \mathbf{x}_k + \alpha_k\mathbf{d}_k$.
4. Check convergence: If $|f(\mathbf{x}_{k+1}) - f(\mathbf{x}_k)| < \epsilon$, STOP. Else, $k \leftarrow k + 1$ and go to step 2.
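The algorithm above can be sketched as follows (the crude halving loop stands in for the line search of step 3, and the $\mu$, $\epsilon$ values and test function are illustrative assumptions):

```python
import numpy as np

def modified_newton(f, grad, hess, x0, mu=1e-3, eps=1e-10, max_iter=100):
    """Newton iteration with regularized Hessian F_k = H + mu*I."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        d = np.linalg.solve(H + mu * np.eye(len(x)), -g)
        alpha, fx = 1.0, f(x)
        while f(x + alpha * d) > fx and alpha > 1e-12:
            alpha *= 0.5                 # crude stand-in for the line search
        x_new = x + alpha * d
        if abs(f(x_new) - fx) < eps:
            return x_new
        x = x_new
    return x

f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])
hess = lambda x: np.diag([2.0, 20.0])
xstar = modified_newton(f, grad, hess, [0.0, 0.0])
print(xstar)    # ~ [1, -2]
```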
Hybrid (Levenberg-Marquardt) Method
$M_k = F_k^{-1}$: step of modified Newton's method
Opportunism systematized!
Note: Cost of evaluating the Hessian remains a bottleneck.
Useful for problems where Hessian estimates come cheap!
Least Square Problems
Linear least squares: residual of the i-th data point,
$$\sum_{k=1}^{n} x_k \phi_k(\theta_i) - y_i = [\boldsymbol{\phi}(\theta_i)]^T\mathbf{x} - y_i.$$
Error vector: $\mathbf{e} = A\mathbf{x} - \mathbf{y}$
Least square fit: Minimize $E = \frac{1}{2}\sum_i e_i^2 = \frac{1}{2}\mathbf{e}^T\mathbf{e}$
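For the linear case the minimizer is available directly, e.g. via `numpy.linalg.lstsq` (an illustrative sketch; the data below are made up):

```python
import numpy as np

# Fit y ~ x1 + x2*t (basis functions 1 and t) by minimizing ||A x - y||^2.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.1, 7.0])
A = np.column_stack([np.ones_like(t), t])
x, res, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print(x)       # intercept and slope of the best straight line
```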
Least Square Problems
Nonlinear least squares: with model $y(\theta) = f(\theta, \mathbf{x}) = f(\theta, x_1, x_2, \ldots, x_n)$,
square error function
$$E(\mathbf{x}) = \frac{1}{2}\sum_i e_i^2 = \frac{1}{2}\sum_i [f(\theta_i, \mathbf{x}) - y_i]^2 = \frac{1}{2}\mathbf{e}^T\mathbf{e},$$
gradient
$$\nabla E(\mathbf{x}) = \sum_i [f(\theta_i, \mathbf{x}) - y_i]\,\nabla f(\theta_i, \mathbf{x}) = J^T\mathbf{e},$$
Hessian
$$H(\mathbf{x}) = \frac{\partial^2 E}{\partial \mathbf{x}^2} = J^T J + \sum_i e_i \frac{\partial^2 f}{\partial \mathbf{x}^2}(\theta_i, \mathbf{x}) \approx J^T J.$$

Levenberg-Marquardt algorithm
1. Select $\mathbf{x}_0$, evaluate $E(\mathbf{x}_0)$. Select tolerance $\epsilon$, initial $\lambda$ and its update factor. Set k = 0.
2. Evaluate $\mathbf{g}_k$ and $\bar{H}_k = J^T J + \lambda\,\mathrm{diag}(J^T J)$. Solve $\bar{H}_k\,\delta\mathbf{x} = -\mathbf{g}_k$. Evaluate $E(\mathbf{x}_k + \delta\mathbf{x})$.
3. If $|E(\mathbf{x}_k + \delta\mathbf{x}) - E(\mathbf{x}_k)| < \epsilon$, STOP.
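A sketch of the scheme on a concrete model (the exponential model, analytic Jacobian, update factor 10 and noise-free data are all illustrative assumptions, not from the slides):

```python
import numpy as np

def levenberg_marquardt(theta, y, x0, lam=1e-3, tol=1e-12, max_iter=200):
    """LM fit of the model f(theta, x) = x[0] * exp(x[1] * theta)."""
    x = np.asarray(x0, dtype=float)
    resid = lambda x: x[0] * np.exp(x[1] * theta) - y
    jac = lambda x: np.column_stack([np.exp(x[1] * theta),
                                     x[0] * theta * np.exp(x[1] * theta)])
    E = 0.5 * resid(x) @ resid(x)
    for _ in range(max_iter):
        J, e = jac(x), resid(x)
        H = J.T @ J + lam * np.diag(np.diag(J.T @ J))
        dx = np.linalg.solve(H, -J.T @ e)
        E_new = 0.5 * resid(x + dx) @ resid(x + dx)
        if E_new < E:                    # accept step, relax damping
            x, E, lam = x + dx, E_new, lam / 10
            if E < tol:
                break
        else:                            # reject step, increase damping
            lam *= 10
    return x

theta = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * theta)           # synthetic, noise-free data
xfit = levenberg_marquardt(theta, y, [1.0, 0.0])
print(xfit)    # ~ [2.0, -1.5]
```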
Points to note
Outline

Methods of Nonlinear Optimization*

Conjugacy of directions:
Two vectors $\mathbf{d}_1$ and $\mathbf{d}_2$ are mutually conjugate with respect to a symmetric matrix A if $\mathbf{d}_1^T A \mathbf{d}_2 = 0$.
Linear independence of conjugate directions:
Conjugate directions with respect to a positive definite matrix are linearly independent.
Expanding subspace property: In $R^n$, with conjugate vectors $\{\mathbf{d}_0, \mathbf{d}_1, \ldots, \mathbf{d}_{n-1}\}$ with respect to symmetric positive definite A, for any $\mathbf{x}_0 \in R^n$, the sequence $\{\mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n\}$ generated as
$$\mathbf{x}_{k+1} = \mathbf{x}_k + \alpha_k\mathbf{d}_k, \quad \text{with} \quad \alpha_k = -\frac{\mathbf{g}_k^T\mathbf{d}_k}{\mathbf{d}_k^T A \mathbf{d}_k},$$
minimizes the quadratic over an expanding sequence of subspaces, reaching its minimum over $R^n$ in at most n steps.
Conjugate gradient method
With $\alpha_k = \frac{\mathbf{g}_k^T\mathbf{g}_k}{\mathbf{d}_k^T A \mathbf{d}_k}$, the deflection coefficient is
$$\beta_k = \frac{\mathbf{g}_{k+1}^T A \mathbf{d}_k}{\mathbf{d}_k^T A \mathbf{d}_k} = \frac{\mathbf{g}_{k+1}^T(\mathbf{g}_{k+1} - \mathbf{g}_k)}{\alpha_k\, \mathbf{d}_k^T A \mathbf{d}_k}.$$
Polak-Ribiere formula:
$$\beta_k = \frac{\mathbf{g}_{k+1}^T(\mathbf{g}_{k+1} - \mathbf{g}_k)}{\mathbf{g}_k^T\mathbf{g}_k}$$
No need to know A!
Further,
$$\mathbf{g}_{k+1}^T\mathbf{d}_k = 0 \quad \Rightarrow \quad \mathbf{g}_{k+1}^T\mathbf{g}_k = \beta_{k-1}(\mathbf{g}_k^T + \alpha_k\mathbf{d}_k^T A)\mathbf{d}_{k-1} = 0.$$
Fletcher-Reeves formula:
$$\beta_k = \frac{\mathbf{g}_{k+1}^T\mathbf{g}_{k+1}}{\mathbf{g}_k^T\mathbf{g}_k}$$
Conjugate gradient algorithm (concluding steps)
5. Find $\beta_k$ by the Polak-Ribiere or the Fletcher-Reeves formula. Obtain $\mathbf{d}_{k+1} = -\mathbf{g}_{k+1} + \beta_k\mathbf{d}_k$.
6. If $1 - \frac{|\mathbf{d}_k^T\mathbf{d}_{k+1}|}{\|\mathbf{d}_k\|\,\|\mathbf{d}_{k+1}\|} < \epsilon_D$, reset $\mathbf{g}_0 = \mathbf{g}_{k+1}$ and go to step 2. Else, $k \leftarrow k + 1$ and go to step 3.
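For a quadratic, where $\mathbf{g}_{k+1} = \mathbf{g}_k + \alpha_k A\mathbf{d}_k$ is available without a line search, the whole method fits in a few lines (an illustrative sketch using the Fletcher-Reeves formula; the test matrix is assumed):

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    """Minimize q(x) = 1/2 x^T A x - b^T x, i.e. solve A x = b,
    with alpha_k = g^T g / (d^T A d) and Fletcher-Reeves beta."""
    x = np.asarray(x0, dtype=float)
    g = A @ x - b                       # gradient of q
    d = -g
    for _ in range(len(b)):
        Ad = A @ d
        alpha = (g @ g) / (d @ Ad)
        x = x + alpha * d
        g_new = g + alpha * Ad
        if np.linalg.norm(g_new) < tol:
            break
        beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves
        d = -g_new + beta * d
        g = g_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
xsol = conjugate_gradient(A, b, [0.0, 0.0])
print(xsol)    # exact solution of A x = b in at most 2 steps
```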
Powell's direction set method

Algorithm
1. Select $\mathbf{x}_0$, and a set of n linearly independent (preferably normalized) directions $\mathbf{d}_1, \mathbf{d}_2, \ldots, \mathbf{d}_n$; possibly $\mathbf{d}_i = \mathbf{e}_i$.
[Figure: iterates x0, x1, x2, x3 in the plane, with new pattern directions z1 and z2 generated over successive cycles]
Quasi-Newton Methods
With $\mathbf{p}_k = \mathbf{x}_{k+1} - \mathbf{x}_k$ and $\mathbf{q}_k = \mathbf{g}_{k+1} - \mathbf{g}_k$, we have $\mathbf{q}_k \approx H\mathbf{p}_k$.
Rank-two (Davidon-Fletcher-Powell) update of the inverse Hessian estimate:
$$B_{k+1} = B_k + \frac{\mathbf{p}_k\mathbf{p}_k^T}{\mathbf{p}_k^T\mathbf{q}_k} - \frac{B_k\mathbf{q}_k\mathbf{q}_k^T B_k}{\mathbf{q}_k^T B_k\mathbf{q}_k}.$$
For a quadratic with Hessian H,
$$\mathbf{p}_i^T H\mathbf{p}_j = 0 \quad \text{for} \quad 0 \le i < j \le k, \qquad B_{k+1}H\mathbf{p}_i = \mathbf{p}_i \quad \text{for} \quad 0 \le i \le k.$$
Implications:
1. Positive definiteness of inverse Hessian estimate is never lost.
2. Successive search directions are conjugate directions.
3. With B0 = I, the algorithm is a conjugate gradient method.
4. For a quadratic problem, the inverse Hessian gets completely constructed after n steps.
Variants: Broyden-Fletcher-Goldfarb-Shanno (BFGS) method and the Broyden family of methods
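A single DFP update can be checked directly against the secant condition $B_{k+1}\mathbf{q}_k = \mathbf{p}_k$ (illustrative code; the matrix and step are assumptions):

```python
import numpy as np

def dfp_update(B, p, q):
    """One DFP update: B + p p^T / (p^T q) - B q q^T B / (q^T B q)."""
    Bq = B @ q
    return B + np.outer(p, p) / (p @ q) - np.outer(Bq, Bq) / (q @ Bq)

H = np.array([[4.0, 1.0], [1.0, 3.0]])   # true Hessian of a quadratic
B = np.eye(2)                            # initial inverse-Hessian estimate
p = np.array([1.0, -0.5])                # a step p_k = x_{k+1} - x_k
q = H @ p                                # gradient difference (exact for a quadratic)
B = dfp_update(B, p, q)
print(B @ q, p)                          # B q reproduces p: secant condition
```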
Closure

[Table: comparison of Newton, Levenberg-Marquardt (hybrid, deflected gradient), DFP/BFGS (quasi-Newton, variable metric), FR/PR (conjugate gradient) and Powell (direction set) methods, recording for quadratic problems the convergence steps, evaluations per step (function, gradient, Hessian), equivalent function evaluations, number of line searches, storage (matrix or vector), performance in general problems, and the problem classes each is practically good for (e.g. nonlinear equation systems, nonlinear least squares, large problems, small problems)]

Points to note
Outline

Constrained Optimization
Constraints
Optimality Criteria
Sensitivity
Duality*
Structure of Methods: An Overview*

Constraints
Constrained optimization problem:
Minimize $f(\mathbf{x})$
subject to $g_i(\mathbf{x}) \le 0$ for $i = 1, 2, \ldots, l$, or $\mathbf{g}(\mathbf{x}) \le \mathbf{0}$;
and $h_j(\mathbf{x}) = 0$ for $j = 1, 2, \ldots, m$, or $\mathbf{h}(\mathbf{x}) = \mathbf{0}$.
Constraints
Gradients of the equality constraints assemble into the Jacobian
$$\frac{\partial\mathbf{h}}{\partial\mathbf{x}} = [\nabla\mathbf{h}(\mathbf{x})]^T.$$

Constraint qualification

Active inequality constraints $g_i(\mathbf{x}_0) = 0$:
Optimality Criteria
Suppose $\mathbf{x}^*$ is a regular point, with the columns of $[\nabla\mathbf{h}(\mathbf{x}^*) \ \ \nabla\mathbf{g}^{(a)}(\mathbf{x}^*)]$ extended by further vectors $\mathbf{d}_1, \mathbf{d}_2, \ldots, \mathbf{d}_k$ to a basis of $R^n$.
Now, $\nabla f(\mathbf{x}^*)$ is a vector in $R^n$:
$$\nabla f(\mathbf{x}^*) = -[\nabla\mathbf{h}(\mathbf{x}^*)]\boldsymbol{\lambda} - [\nabla\mathbf{g}^{(a)}(\mathbf{x}^*)]\boldsymbol{\mu}^{(a)},$$
or
$$\nabla f(\mathbf{x}^*) + [\nabla\mathbf{h}(\mathbf{x}^*)]\boldsymbol{\lambda} + [\nabla\mathbf{g}(\mathbf{x}^*)]\boldsymbol{\mu} = \mathbf{0},$$
where $\mathbf{g}(\mathbf{x}) = \begin{bmatrix} \mathbf{g}^{(a)}(\mathbf{x}) \\ \mathbf{g}^{(i)}(\mathbf{x}) \end{bmatrix}$ and $\boldsymbol{\mu} = \begin{bmatrix} \boldsymbol{\mu}^{(a)} \\ \boldsymbol{\mu}^{(i)} \end{bmatrix}$.
Notice: $\mathbf{g}^{(a)}(\mathbf{x}^*) = \mathbf{0}$ and $\boldsymbol{\mu}^{(i)} = \mathbf{0}$
$\Rightarrow \mu_i\, g_i(\mathbf{x}^*) = 0 \ \forall i$, or $\boldsymbol{\mu}^T\mathbf{g}(\mathbf{x}^*) = 0$.
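The stationarity condition $\nabla f + [\nabla\mathbf{h}]\boldsymbol{\lambda} = \mathbf{0}$ can be checked on a tiny equality-constrained example (an illustrative, hypothetical problem instance, not from the slides):

```python
import numpy as np

# Minimize f(x) = x1^2 + x2^2 subject to h(x) = x1 + x2 - 1 = 0.
# Stationarity (2 x1 + lambda = 0, 2 x2 + lambda = 0) together with
# feasibility gives a linear system in (x1, x2, lambda).
K = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
rhs = np.array([0.0, 0.0, 1.0])
x1, x2, lam = np.linalg.solve(K, rhs)
print(x1, x2, lam)      # x* = (1/2, 1/2), lambda = -1
```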
Optimality Criteria
Combining it with $\boldsymbol{\mu} \ge \mathbf{0}$ gives the complete first order necessary (KKT) conditions.

Lagrangian function: the first order conditions take the compact form $\nabla L = \mathbf{0}$.

Second order conditions follow by examining
$$\frac{d}{dt}\left[ \nabla f(\mathbf{z}(t))^T\mathbf{z}'(t) \right]_{t=0}$$
along feasible curves $\mathbf{z}(t)$, where
$$H_L(\mathbf{x}) = \frac{\partial^2 L}{\partial\mathbf{x}^2} = H(\mathbf{x}) + \sum_j \lambda_j H_{h_j}(\mathbf{x}) + \sum_i \mu_i H_{g_i}(\mathbf{x}).$$
Sensitivity

Sensitivity to constraints
In particular, in a revised problem, with $\mathbf{h}(\mathbf{x}) = \mathbf{c}$ and $\mathbf{g}(\mathbf{x}) \le \mathbf{d}$, using $\mathbf{p} = \mathbf{c}$,
$$\nabla_p f(\mathbf{x}^*, \mathbf{p}) = \mathbf{0}, \quad \nabla_p\mathbf{h}(\mathbf{x}^*, \mathbf{p}) = -I \quad \text{and} \quad \nabla_p\mathbf{g}(\mathbf{x}^*, \mathbf{p}) = \mathbf{0};$$
$$\nabla_c f(\mathbf{x}^*(\mathbf{p}), \mathbf{p}) = -\boldsymbol{\lambda}, \qquad \nabla_d f(\mathbf{x}^*(\mathbf{p}), \mathbf{p}) = -\boldsymbol{\mu}.$$
Duality*
Dual problem:
Reformulation of a problem in terms of the Lagrange multipliers.
Suppose $\mathbf{x}^*$ is a local minimum for the problem
minimize $f(\mathbf{x})$ subject to $\mathbf{h}(\mathbf{x}) = \mathbf{0}$,
with Lagrange multiplier (vector) $\boldsymbol{\lambda}^*$:
$$\nabla f(\mathbf{x}^*) + [\nabla\mathbf{h}(\mathbf{x}^*)]\boldsymbol{\lambda}^* = \mathbf{0}$$
Duality*
Hessian of the dual function:
$$H_\phi(\boldsymbol{\lambda}) = \nabla_\lambda\mathbf{x}(\boldsymbol{\lambda})\, \nabla_x\mathbf{h}(\mathbf{x}(\boldsymbol{\lambda}))$$
Differentiating $\nabla_x L(\mathbf{x}(\boldsymbol{\lambda}), \boldsymbol{\lambda}) = \mathbf{0}$, we have
$$\nabla_\lambda\mathbf{x}(\boldsymbol{\lambda})\, H_L(\mathbf{x}(\boldsymbol{\lambda}), \boldsymbol{\lambda}) + [\nabla_x\mathbf{h}(\mathbf{x}(\boldsymbol{\lambda}))]^T = \mathbf{0}.$$
Solving for $\nabla_\lambda\mathbf{x}(\boldsymbol{\lambda})$ and substituting,
$$H_\phi(\boldsymbol{\lambda}) = -[\nabla_x\mathbf{h}(\mathbf{x}(\boldsymbol{\lambda}))]^T [H_L(\mathbf{x}(\boldsymbol{\lambda}), \boldsymbol{\lambda})]^{-1}\, \nabla_x\mathbf{h}(\mathbf{x}(\boldsymbol{\lambda})),$$
negative definite!
At $\boldsymbol{\lambda}^*$, $\mathbf{x}(\boldsymbol{\lambda}^*) = \mathbf{x}^*$, $\nabla\phi(\boldsymbol{\lambda}^*) = \mathbf{h}(\mathbf{x}^*) = \mathbf{0}$, $H_\phi(\boldsymbol{\lambda}^*)$ is negative definite and the dual function is maximized.
$$\phi(\boldsymbol{\lambda}^*) = L(\mathbf{x}^*, \boldsymbol{\lambda}^*) = f(\mathbf{x}^*)$$

Duality*: Consolidation (including all constraints)
Points to note
Constraint qualification
KKT conditions
Outline

Linear and Quadratic Programming Problems*
Linear Programming
Quadratic Programming

Linear Programming
Standard form:
$$f(\mathbf{x}) = \mathbf{c}^T\mathbf{x}, \quad A\mathbf{x} = \mathbf{b}, \quad \mathbf{x} \ge \mathbf{0}; \quad \text{with} \quad \mathbf{b} \ge \mathbf{0}.$$
Linear Programming
General perspective
LP problem:
Minimize $f(\mathbf{x}, \mathbf{y}) = \mathbf{c}_1^T\mathbf{x} + \mathbf{c}_2^T\mathbf{y}$;
subject to $A_{11}\mathbf{x} + A_{12}\mathbf{y} = \mathbf{b}_1$, $A_{21}\mathbf{x} + A_{22}\mathbf{y} \ge \mathbf{b}_2$, $\mathbf{y} \ge \mathbf{0}$.
Lagrangian:
$$L(\mathbf{x}, \mathbf{y}, \boldsymbol{\lambda}, \boldsymbol{\mu}, \boldsymbol{\nu}) = \mathbf{c}_1^T\mathbf{x} + \mathbf{c}_2^T\mathbf{y} - \boldsymbol{\lambda}^T(A_{11}\mathbf{x} + A_{12}\mathbf{y} - \mathbf{b}_1) - \boldsymbol{\mu}^T(A_{21}\mathbf{x} + A_{22}\mathbf{y} - \mathbf{b}_2) - \boldsymbol{\nu}^T\mathbf{y}$$
Optimality conditions:
$$A_{11}^T\boldsymbol{\lambda} + A_{21}^T\boldsymbol{\mu} = \mathbf{c}_1 \quad \text{and} \quad \boldsymbol{\nu} = \mathbf{c}_2 - A_{12}^T\boldsymbol{\lambda} - A_{22}^T\boldsymbol{\mu} \ge \mathbf{0}, \quad \boldsymbol{\mu}, \boldsymbol{\nu} \ge \mathbf{0}$$
Sensitivity to the constraints: $\frac{\partial f}{\partial\mathbf{b}_1} = \boldsymbol{\lambda}$ and $\frac{\partial f}{\partial\mathbf{b}_2} = \boldsymbol{\mu}$
Dual problem:
maximize $\phi(\boldsymbol{\lambda}, \boldsymbol{\mu}) = \mathbf{b}_1^T\boldsymbol{\lambda} + \mathbf{b}_2^T\boldsymbol{\mu}$;
subject to $A_{11}^T\boldsymbol{\lambda} + A_{21}^T\boldsymbol{\mu} = \mathbf{c}_1$, $A_{12}^T\boldsymbol{\lambda} + A_{22}^T\boldsymbol{\mu} \le \mathbf{c}_2$, $\boldsymbol{\mu} \ge \mathbf{0}$.
Quadratic Programming
Minimize f(x) = ½ xᵀQx + cᵀx,
subject to Ax = b.
First order necessary conditions form a linear system:
  [ Q  Aᵀ ] [ x ]   [ −c ]
  [ A  0  ] [ λ ] = [  b ]
Solution of this linear system yields the complete result!
Caution: This coefficient matrix is indefinite.
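A minimal sketch of solving such an equality-constrained QP by assembling the indefinite KKT system; the problem data here is an illustrative assumption, not from the lecture.

```python
import numpy as np

# min (1/2) x^T Q x + c^T x  s.t.  A x = b, via  [Q A^T; A 0][x; lam] = [-c; b].
# Illustrative data: min x1^2 + x2^2 - 2 x1 - 4 x2  s.t.  x1 + x2 = 1.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

n, m = Q.shape[0], A.shape[0]
K = np.block([[Q, A.T], [A, np.zeros((m, m))]])   # indefinite KKT matrix
rhs = np.concatenate([-c, b])
sol = np.linalg.solve(K, rhs)                     # nonsingular despite indefiniteness
x, lam = sol[:n], sol[n:]
print(x)          # minimizer on the constraint plane
print(A @ x - b)  # constraint residual, ~0
```

A direct solve works here, but for large problems one would use a solver that exploits the symmetric indefinite block structure.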
Quadratic Programming
Minimize f(x) = ½ xᵀQx + cᵀx;
subject to A₁x = b₁, A₂x ≤ b₂.
With x = xₖ + dₖ,
  f(xₖ + dₖ) = ½ (xₖ + dₖ)ᵀQ(xₖ + dₖ) + cᵀ(xₖ + dₖ)
             = ½ dₖᵀQdₖ + (c + Qxₖ)ᵀdₖ + f(xₖ).
Since gₖ ≡ ∇f(xₖ) = c + Qxₖ, subsidiary quadratic program:
minimize ½ dₖᵀQdₖ + gₖᵀdₖ subject to Adₖ = 0.
Quadratic Programming
Minimize f(x) = ½ xᵀQx + cᵀx,
subject to Ax ≤ b, x ≥ 0.
KKT conditions, with slack variables y and multipliers μ, ξ:
  ξ = Qx + c + Aᵀμ, Ax + y = b,
  ξᵀx = μᵀy = 0, x, y, ξ, μ ≥ 0.
Denoting
  z = [x; μ], w = [ξ; y], q = [c; b] and M = [ Q  Aᵀ ; −A  0 ],
the conditions take the linear complementarity form
  w − Mz = q, wᵀz = 0, w, z ≥ 0.
Quadratic Programming
If q ≥ 0, then w = q, z = 0 is a solution!
At every step, one variable is driven out of the basis and its
partner called in.
Very clumsy!!
Points to note
Outline
Polynomial Interpolation
Polynomial Interpolation
Fitting p(x) = a₀ + a₁x + ⋯ + aₙxⁿ through the given points leads to
the Vandermonde system
  [ 1  x₀  x₀²  ⋯  x₀ⁿ ] [ a₀ ]   [ f(x₀) ]
  [ 1  x₁  x₁²  ⋯  x₁ⁿ ] [ a₁ ]   [ f(x₁) ]
  [ 1  x₂  x₂²  ⋯  x₂ⁿ ] [ a₂ ] = [ f(x₂) ]
  [ ⋮   ⋮    ⋮   ⋱   ⋮  ] [ ⋮  ]   [   ⋮   ]
  [ 1  xₙ  xₙ²  ⋯  xₙⁿ ] [ aₙ ]   [ f(xₙ) ]
Polynomial Interpolation
Lagrange interpolation
Basis functions:
  Lₖ(x) = ∏_{j=0, j≠k}ⁿ (x − xⱼ) / ∏_{j=0, j≠k}ⁿ (xₖ − xⱼ)
Interpolating polynomial:
  p(x) = Σ_{k=0}ⁿ f(xₖ) Lₖ(x)
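The basis-function formula above can be sketched directly in code; the sample data below (points of f(x) = x² + 1) is an illustrative assumption.

```python
# Lagrange interpolation: p(x) = sum_k f(x_k) L_k(x),
# with L_k(x) = prod_{j != k} (x - x_j) / (x_k - x_j).
def lagrange_eval(xs, fs, x):
    total = 0.0
    for k, xk in enumerate(xs):
        Lk = 1.0                       # build the k-th basis function at x
        for j, xj in enumerate(xs):
            if j != k:
                Lk *= (x - xj) / (xk - xj)
        total += fs[k] * Lk
    return total

xs = [0.0, 1.0, 2.0]
fs = [1.0, 2.0, 5.0]                   # samples of f(x) = x^2 + 1
print(lagrange_eval(xs, fs, 1.5))      # -> 3.25, exact since f is quadratic
```

Note that p(x) reproduces the data exactly at the nodes, since Lₖ(xⱼ) = δₖⱼ.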
Polynomial Interpolation
Two interpolation formulae
Hermite interpolation
Piecewise linear interpolation:
  f(x) ≈ f(xᵢ₋₁) + [f(xᵢ) − f(xᵢ₋₁)]/(xᵢ − xᵢ₋₁) · (x − xᵢ₋₁) for x ∈ [xᵢ₋₁, xᵢ]
Handy for many uses with dense data. But, not differentiable.
Piecewise cubic interpolation
With function values and derivatives at (n + 1) points,
n cubic Hermite segments
Data for the j-th segment: fⱼ₋₁, fⱼ, f′ⱼ₋₁, f′ⱼ
Interpolating polynomial:
  pⱼ(x) = a₀ + a₁x + a₂x² + a₃x³
Coefficients a₀, a₁, a₂, a₃: linear combinations of fⱼ₋₁, fⱼ, f′ⱼ₋₁, f′ⱼ
In terms of a local parameter t ∈ [0, 1] for the j-th segment,
  qⱼ(t) = [α₀ α₁ α₂ α₃] [1 t t² t³]ᵀ = Gⱼ W [1 t t² t³]ᵀ,
with Gⱼ = [g₀ g₁ g₀′ g₁′] collecting the segment's end data and W the
constant Hermite coefficient matrix.
Polynomial Interpolation
Interpolation of Multivariate Functions
Bilinear interpolation over a rectangle:
  p(x, y) = [1 x] [ a₀,₀  a₀,₁ ; a₁,₀  a₁,₁ ] [1; y]
In local coordinates (u, v),
  g(u, v) = [1 u] [ α₀,₀  α₀,₁ ; α₁,₀  α₁,₁ ] [1; v]
Polynomial Interpolation
Interpolation of Multivariate Functions
Piecewise bicubic interpolation: concisely,
  g(u, v) = Uᵀ W Gᵢ,ⱼ Wᵀ V,
with U = [1 u u² u³]ᵀ, V = [1 v v² v³]ᵀ, W the constant Hermite
coefficient matrix, and Gᵢ,ⱼ collecting the corner data g, g_u, g_v,
g_uv at (0,0), (1,0), (0,1), (1,1), the function values being
fᵢ₋₁,ⱼ₋₁, fᵢ₋₁,ⱼ, fᵢ,ⱼ₋₁, fᵢ,ⱼ.
Chain rule between local and global derivatives:
  ∂g/∂u = (xᵢ − xᵢ₋₁) ∂f/∂x,
  ∂g/∂v = (yⱼ − yⱼ₋₁) ∂f/∂y, and
  ∂²g/∂u∂v = (xᵢ − xᵢ₋₁)(yⱼ − yⱼ₋₁) ∂²f/∂x∂y
A Note on Approximation of Functions
Criteria:
Interpolatory approximation: Exact agreement with sampled data
Least square approximation: Minimization of a sum (or integral) of
square errors over sampled data
Minimax approximation: Limiting the largest deviation
Basis functions:
polynomials, sinusoids, orthogonal eigenfunctions or
field-specific heuristic choice
Points to note
Outline
Approximating the integral by a sum over n sub-intervals,
  ∫ₐᵇ f(x) dx ≈ Σᵢ₌₁ⁿ f(x̄ᵢ) δxᵢ.
Taking a uniform step h = (b − a)/n and x̄ᵢ as the midpoint of the
i-th interval,
  ∫ₐᵇ f(x) dx ≈ h Σᵢ₌₁ⁿ f(x̄ᵢ).
Midpoint rule error: expanding about x̄ᵢ,
  ∫_{xᵢ₋₁}^{xᵢ} f(x) dx = ∫_{xᵢ₋₁}^{xᵢ} [f(x̄ᵢ) + f′(x̄ᵢ)(x − x̄ᵢ) + f″(x̄ᵢ)(x − x̄ᵢ)²/2 + ⋯] dx
                        = h f(x̄ᵢ) + (h³/24) f″(x̄ᵢ) + (h⁵/1920) f⁗(x̄ᵢ) + ⋯,
third order accurate!
Trapezoidal rule
Approximating function f(x) with a linear interpolation,
  ∫_{xᵢ₋₁}^{xᵢ} f(x) dx ≈ (h/2) [f(xᵢ₋₁) + f(xᵢ)]
and
  ∫ₐᵇ f(x) dx ≈ h [ ½ f(x₀) + Σᵢ₌₁ⁿ⁻¹ f(xᵢ) + ½ f(xₙ) ].
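The composite formula above translates directly into code; the integrand sin(x) on [0, π] is an illustrative choice (exact value 2).

```python
import math

# Composite trapezoidal rule:
# int_a^b f dx ~ h [ f(x0)/2 + f(x1) + ... + f(x_{n-1}) + f(xn)/2 ].
def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))       # endpoint terms with weight 1/2
    for i in range(1, n):
        s += f(a + i * h)         # interior terms with weight 1
    return h * s

approx = trapezoid(math.sin, 0.0, math.pi, 200)
print(approx)  # close to the exact value 2; the error is O(h^2)
```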
Error of the trapezoidal rule: expanding f(xᵢ₋₁) and f(xᵢ) about x̄ᵢ,
  f(xᵢ₋₁) = f(x̄ᵢ) − (h/2) f′(x̄ᵢ) + (h²/8) f″(x̄ᵢ) − (h³/48) f‴(x̄ᵢ) + (h⁴/384) f⁗(x̄ᵢ) − ⋯
  f(xᵢ)   = f(x̄ᵢ) + (h/2) f′(x̄ᵢ) + (h²/8) f″(x̄ᵢ) + (h³/48) f‴(x̄ᵢ) + (h⁴/384) f⁗(x̄ᵢ) + ⋯
Hence
  (h/2) [f(xᵢ₋₁) + f(xᵢ)] = h f(x̄ᵢ) + (h³/8) f″(x̄ᵢ) + (h⁵/384) f⁗(x̄ᵢ) + ⋯
Recall ∫_{xᵢ₋₁}^{xᵢ} f(x) dx = h f(x̄ᵢ) + (h³/24) f″(x̄ᵢ) + (h⁵/1920) f⁗(x̄ᵢ) + ⋯.
Simpson's rules
Divide [a, b] into an even number (n = 2m) of intervals.
Fit a quadratic polynomial over a panel of two intervals.
For this panel of length 2h, two estimates:
  M(f) = 2h f(xᵢ) and T(f) = h [f(xᵢ₋₁) + f(xᵢ₊₁)]
  J = M(f) + (h³/3) f″(xᵢ) + (h⁵/60) f⁗(xᵢ) + ⋯
  J = T(f) − (2h³/3) f″(xᵢ) − (h⁵/15) f⁗(xᵢ) + ⋯
Simpson's one-third rule (with error estimate):
  ∫_{xᵢ₋₁}^{xᵢ₊₁} f(x) dx = (h/3) [f(xᵢ₋₁) + 4f(xᵢ) + f(xᵢ₊₁)] − (h⁵/90) f⁗(xᵢ)
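The composite one-third rule over n = 2m intervals can be sketched as follows; the test integrand is again an illustrative choice.

```python
import math

# Composite Simpson's one-third rule:
# int ~ h/3 [f0 + 4(f1 + f3 + ...) + 2(f2 + f4 + ...) + fn], n even.
def simpson(f, a, b, n):
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)   # alternate weights 4, 2
    return h * s / 3.0

print(simpson(math.sin, 0.0, math.pi, 20))  # ~2, with O(h^4) error
```

Because the leading error term carries f⁗, the rule is exact for cubics.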
Richardson extrapolation
To determine quantity F*: using a step size h, estimate F(h);
error terms: hᵖ, h^q, h^r etc. (p < q < r).
F* = lim_{δ→0} F(δ)?
Idea: compute F(h), F(αh), F(α²h) (with α < 1) and extrapolate.
  F(h) = F* + c hᵖ + O(h^q)
  F₁(h)  = [F(αh) − αᵖ F(h)] / (1 − αᵖ)   = F* + c₁ hᑫ + O(h^r), i.e. F* + c₁ h^q + O(h^r)
  F₁(αh) = [F(α²h) − αᵖ F(αh)] / (1 − αᵖ) = F* + c₁ (αh)^q + O(h^r)
  F₂(h)  = [F₁(αh) − α^q F₁(h)] / (1 − α^q) = F* + O(h^r)
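A minimal sketch of one level of this extrapolation, applied (as an illustrative choice) to the central difference derivative, for which p = 2 and q = 4:

```python
import math

# One Richardson step: F(h) = F* + c h^p + O(h^q), so combining F(h) and
# F(alpha*h) cancels the h^p term:
# F1(h) = (F(alpha*h) - alpha**p * F(h)) / (1 - alpha**p) = F* + O(h^q).
def richardson(F, h, alpha, p):
    return (F(alpha * h) - alpha**p * F(h)) / (1 - alpha**p)

f, x0 = math.sin, 1.0
D = lambda h: (f(x0 + h) - f(x0 - h)) / (2 * h)   # central difference, O(h^2)
h = 0.1
print(abs(D(h) - math.cos(x0)))                       # O(h^2) error
print(abs(richardson(D, h, 0.5, 2) - math.cos(x0)))   # much smaller, O(h^4)
```

With α = 1/2 and p = 2 this reduces to the familiar (4F(h/2) − F(h))/3 combination.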
For ∫ₐᵇ f(x) dx with trapezoidal estimates: p = 2, q = 4, r = 6 etc.
Further Issues
Fraction (xᵢ − xᵢ₋₁)/(b − a) of the interval [a, b]
Gaussian quadrature
Points to note
Romberg integration
Adaptive quadrature
Outline
Gaussian Quadrature
Multiple Integrals
Gaussian Quadrature
A typical quadrature formula: a weighted sum Σᵢ₌₀ⁿ wᵢ fᵢ
fᵢ: function value at i-th sampled point
wᵢ: corresponding weight
Newton-Cotes formulae:
Abscissas (xᵢ's) of sampling prescribed
Coefficients or weight values determined to eliminate
dominant error terms
Gaussian quadrature rules:
no prescription of quadrature points
only the number of quadrature points prescribed
locations as well as weights contribute to the accuracy criteria
with n integration points, 2n degrees of freedom
can be made exact for polynomials of degree up to 2n − 1
best locations: interior points
open quadrature rules: can handle integrable singularities
Gaussian Quadrature
Gauss-Legendre quadrature
Two-point rule: demand exactness of w₁f(x₁) + w₂f(x₂) for 1, x, x², x³:
  ∫₋₁¹ dx = 2, ∫₋₁¹ x dx = 0, ∫₋₁¹ x² dx = 2/3, ∫₋₁¹ x³ dx = 0.
  x₁ = −x₂, w₁ = w₂ ⇒ w₁ = w₂ = 1, x₁ = −1/√3, x₂ = 1/√3
Gaussian Quadrature
For a general interval [a, b], substitute
  x = (a + b)/2 + [(b − a)/2] t and dx = [(b − a)/2] dt.
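The two-point rule together with this interval mapping can be sketched as:

```python
import math

# Two-point Gauss-Legendre rule on [-1, 1]: nodes +-1/sqrt(3), weights 1,
# mapped to [a, b] via x = (a+b)/2 + (b-a)/2 * t, dx = (b-a)/2 dt.
def gauss2(f, a, b):
    t = 1.0 / math.sqrt(3.0)
    mid, half = (a + b) / 2.0, (b - a) / 2.0
    return half * (f(mid - half * t) + f(mid + half * t))

# Exact for polynomials up to degree 3: int_0^1 x^3 dx = 1/4.
print(gauss2(lambda x: x**3, 0.0, 1.0))  # -> 0.25 (up to rounding)
```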
Gaussian Quadrature
Writing f(x) = ψ(x) p(x) + Σᵢ₌₀ⁿ⁻¹ λᵢ xⁱ (p: the node polynomial),
  ∫ f(x) dx = ∫ ψ(x) p(x) dx + Σᵢ₌₀ⁿ⁻¹ λᵢ ∫ xⁱ dx.
Interpolating through the nodes,
  ∫₋₁¹ f(x) dx = Σⱼ₌₁ⁿ f(xⱼ) ∫₋₁¹ Lⱼ(x) dx
Weight values: wⱼ = ∫₋₁¹ Lⱼ(x) dx, for j = 1, 2, ⋯, n
Gaussian Quadrature
Weighted quadrature:
  ∫ W(x) f(x) dx = Σⱼ₌₁ⁿ wⱼ f(xⱼ)
Gauss-Chebyshev quadrature
Gauss-Laguerre quadrature
Gauss-Hermite quadrature
Multiple Integrals
Double integral over a general domain:
  S = ∫ₐᵇ ∫_{g₁(x)}^{g₂(x)} f(x, y) dy dx
With F(x) = ∫_{g₁(x)}^{g₂(x)} f(x, y) dy, S = ∫ₐᵇ F(x) dx.
Monte Carlo integration of ∫_Ω f(x) dV over an irregular domain Ω:
enclose Ω in a regular volume V and define
  F(x) = f(x) if x ∈ Ω, 0 otherwise.
Then, with N random points xᵢ in V,
  ∫_Ω f(x) dV ≈ (V/N) Σᵢ₌₁ᴺ F(xᵢ)
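The Monte Carlo estimate above can be sketched on a simple case: Ω is the unit disc inside the box [−1, 1]², f = 1, so the integral is the disc's area π (an illustrative choice).

```python
import random

# Monte Carlo: int_Omega f dV ~ (V/N) sum F(x_i), with F = f inside Omega
# and 0 outside. Here Omega is the unit disc in the box [-1, 1]^2, f = 1.
random.seed(0)
N, hits = 200_000, 0
for _ in range(N):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if x * x + y * y <= 1.0:
        hits += 1                 # F = 1 inside Omega, 0 outside
V_box = 4.0                       # volume (area) of the enclosing box
print(V_box * hits / N)           # ~3.14; statistical error is O(1/sqrt(N))
```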
Points to note
Outline
Single-Step Methods
Practical Implementation of Single-Step Methods
Systems of ODEs
Multi-Step Methods*
Single-Step Methods
y₁ = y(x₁) = y(x₀ + h) = ?
Found (x₁, y₁).
Euler's method
With dy/dx = f(xₙ, yₙ),
  yₙ₊₁ = yₙ + h f(xₙ, yₙ)
Repetition of such steps constructs y(x).
First order truncated Taylor's series:
Expected error: O(h²)
Accumulation over steps ⇒ Total error: O(h)
Euler's method is a first order method.
Question: Total error = Sum of errors over the steps?
Answer: No, in general.
[Figure: solution curves C, C₁, C₂, C₃ through successive points
(x₀, y₀), (x₁, y₁), …, illustrating how per-step errors propagate
along neighbouring trajectories.]
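The step formula above is a one-liner in code; the test problem y′ = y, y(0) = 1, with exact solution eˣ, is an illustrative assumption.

```python
import math

# Euler's method: y_{n+1} = y_n + h f(x_n, y_n).
def euler(f, x0, y0, h, nsteps):
    x, y = x0, y0
    for _ in range(nsteps):
        y += h * f(x, y)
        x += h
    return y

# y' = y, y(0) = 1, so y(1) = e.
approx = euler(lambda x, y: y, 0.0, 1.0, 0.001, 1000)
print(approx, math.e)  # first order: the error shrinks roughly linearly in h
```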
Single-Step Methods
Runge-Kutta methods
Second order method:
  k₁ = h f(xₙ, yₙ), k₂ = h f(xₙ + αh, yₙ + βk₁)
and
  k = w₁k₁ + w₂k₂, xₙ₊₁ = xₙ + h, yₙ₊₁ = yₙ + k
Matching with the Taylor series
  y(xₙ + h) = yₙ + h f(xₙ, yₙ) + (h²/2) [fₓ(xₙ, yₙ) + f(xₙ, yₙ) f_y(xₙ, yₙ)] + ⋯
gives
  w₁ + w₂ = 1, w₂α = w₂β = 1/2
  ⇒ α = β = 1/(2w₂), w₁ = 1 − w₂
With continuous choice of w₂,
Fourth order Runge-Kutta method:
  k₁ = h f(xₙ, yₙ)
  k₂ = h f(xₙ + h/2, yₙ + k₁/2)
  k₃ = h f(xₙ + h/2, yₙ + k₂/2)
  k₄ = h f(xₙ + h, yₙ + k₃)
  yₙ₊₁ = yₙ + (k₁ + 2k₂ + 2k₃ + k₄)/6
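The four-stage formula above can be sketched as a single step function; the test problem y′ = y is again an illustrative choice.

```python
import math

# Classical fourth order Runge-Kutta step.
def rk4_step(f, x, y, h):
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

x, y, h = 0.0, 1.0, 0.1           # y' = y, y(0) = 1
for _ in range(10):
    y = rk4_step(lambda x, y: y, x, y, h)
    x += h
print(y, math.e)  # close to e even at this coarse step; global error is O(h^4)
```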
Practical Implementation of Single-Step Methods
Step doubling: with a fifth order local error, one full step gives
  y(xₙ + h) = yₙ₊₁⁽¹⁾ + c h⁵ + higher order terms,
and two half steps give
  y(xₙ + h) = yₙ₊₁⁽²⁾ + 2c (h/2)⁵ + higher order terms.
Error estimate:
  Δ = yₙ₊₁⁽²⁾ − yₙ₊₁⁽¹⁾ ≈ (15/16) c h⁵
Improved value:
  y(xₙ + h) ≈ [16 yₙ₊₁⁽²⁾ − yₙ₊₁⁽¹⁾] / 15
Practical Implementation of Single-Step Methods
Evaluation of a step, with error estimate Δ against tolerance ε:
Δ > ε: Step size is too large for accuracy.
Subdivide the interval.
Δ ≪ ε: Step size is inefficient!
Start with a large step size.
Keep subdividing intervals whenever Δ > ε.
Fast marching over smooth segments and small steps in
zones featuring rapid changes in y(x).
Runge-Kutta-Fehlberg method
With six function values,
an RK4 formula embedded in an RK5 formula:
two independent estimates and an error estimate!
RKF45 in professional implementations
Systems of ODEs
Methods for a single first order ODE apply directly to the vector ODE
  dz/dx = f(x, z).
Cast higher order problems into the state space form with the state
vector of dimension n = n₁ + n₂ + n₃ + ⋯ + n_k.
Example: a coupled pair of ODEs, second order in x(t) and third order
in y(t).
State vector: z(t) = [x  dx/dt  y  dy/dt  d²y/dt²]ᵀ
Solving the given equations for the highest derivatives d²x/dt² and
d³y/dt³, the system takes the standard form
  dz/dt = f(t, z).
Multi-Step Methods*
Points to note
Outline
Stability Analysis
Stability Analysis
Discrepancy or error: apply the method to the test equation y′ = λy.
For the forward Euler method, stability requires |1 + λh| < 1, i.e.
  h < −2 Re(λ)/|λ|².
[Figure: stability regions of Euler, RK2 and RK4 in the complex λh
plane; the methods are unstable outside these bounded regions.]
Implicit Methods
Backward Euler's method:
  yₙ₊₁ = yₙ + h f(xₙ₊₁, yₙ₊₁)
Is it worth solving? Error propagation:
  εₙ₊₁ = εₙ + h J(xₙ₊₁, yₙ₊₁) εₙ₊₁
Implicit Methods
[Figure: stability region of backward Euler in the complex λh plane:
the method is stable everywhere except inside the unit disc centred
at λh = 1.]
Stiff Differential Equations
  ẍ + cẋ + kx = 0, x(0) = 0, ẋ(0) = 1
(a) c = 3, k = 2: x = e⁻ᵗ − e⁻²ᵗ
(b) c = 49, k = 600: x = e⁻²⁴ᵗ − e⁻²⁵ᵗ
(c) c = 302, k = 600: x = (e⁻²ᵗ − e⁻³⁰⁰ᵗ)/298
[Figures: solution curves x(t) for these cases; the fast-decaying
mode in the stiff cases dies out quickly but still restricts the
step size of explicit methods.]
Shooting method
follows the strategy of adjusting trials to hit a target.
Consider the 2-point BVP
  y′ = f(x, y), g₁(y(a)) = 0, g₂(y(b)) = 0,
where g₁ ∈ ℝⁿ¹, g₂ ∈ ℝⁿ², and n₁ + n₂ = n.
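The trial-and-adjust strategy can be sketched on a simple illustrative BVP (an assumption, not from the slides): y″ = 6x, y(0) = 0, y(1) = 1, whose exact solution is y = x³. We guess the missing slope s = y′(0), integrate the resulting IVP, and iterate on the boundary residual.

```python
def f(x, u):                      # first-order system u = (y, y'): u' = (y', 6x)
    return (u[1], 6.0 * x)

def integrate(s, n=200):
    """Integrate the IVP y(0)=0, y'(0)=s to x=1 with RK4; return y(1)."""
    h = 1.0 / n
    x, u = 0.0, (0.0, s)
    for _ in range(n):
        k1 = f(x, u)
        k2 = f(x + h/2, (u[0] + h/2*k1[0], u[1] + h/2*k1[1]))
        k3 = f(x + h/2, (u[0] + h/2*k2[0], u[1] + h/2*k2[1]))
        k4 = f(x + h, (u[0] + h*k3[0], u[1] + h*k3[1]))
        u = (u[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
             u[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
        x += h
    return u[0]

# Secant iteration on the boundary residual r(s) = y(1; s) - 1.
s0, s1 = 0.5, 1.0
r0, r1 = integrate(s0) - 1.0, integrate(s1) - 1.0
while abs(r1) > 1e-9:
    s0, s1, r0 = s1, s1 - r1 * (s1 - s0) / (r1 - r0), r1
    r1 = integrate(s1) - 1.0
print(s1)  # converges to y'(0) = 0, the slope of the exact solution x^3
```

Since this BVP is linear in the unknown slope, the secant iteration lands on the target in one step; nonlinear problems need several trials.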
[Figure: computational cost E as a function of the trial parameter p.]
Points to note
Outline
Closure
  y′ = f(x, y), y(x₀) = y₀
From (x, y), the trajectory develops according to y′ = f(x, y).
The new point: (x + δx, y + f(x, y) δx)
The slope now: f(x + δx, y + f(x, y) δx)
Question: Was the old direction of approach valid?
With δx → 0, directions are appropriate, if
  lim_{x→x̄} f(x, y) = f(x̄, y(x̄)),
i.e. if f is continuous.
[Figure: rectangle |x − x₀| ≤ h, |y − y₀| ≤ k around (x₀, y₀), with the
solution confined by the slope bound M in the two cases (a) Mh ≤ k and
(b) Mh ≥ k.]
Guaranteed neighbourhood:
  [x₀ − δ, x₀ + δ], where δ = min(h, k/M) > 0
Example: y′ = (y − 1)/x, y(0) = 1
Function f(x, y) = (y − 1)/x is undefined at the initial point x = 0.
Well-posed IVP:
An initial value problem is said to be well-posed if there
exists a solution to it, the solution is unique, and it
depends continuously on the initial conditions.
Uniqueness Theorems
Lipschitz condition:
  |f(x, y) − f(x, z)| ≤ L |y − z|
For two solutions y(x) and z(x), let E(x) = [y(x) − z(x)]². Then
  dE/dx ≤ 2L E(x) ⇒ ∫_{x₀}^{x} (dE/dx)/E(x) dx ≤ 2L (x − x₀),
whence E(x) ≤ E(x₀) e^{2L(x−x₀)}: with E(x₀) = 0, E(x) ≡ 0 and the
solution is unique.
Uniqueness Theorems
Picard's theorem: If f(x, y) and ∂f/∂y are continuous and
bounded on a rectangle
R = {(x, y) : a < x < b, c < y < d}, then for every
(x₀, y₀) ∈ R, the IVP y′ = f(x, y), y(x₀) = y₀ has a
unique solution in some neighbourhood |x − x₀| ≤ h.
For systems, Lipschitz condition:
  ‖f(x, y) − f(x, z)‖ ≤ L ‖y − z‖
Partial derivative ∂f/∂y generalizes to the Jacobian.
With z₁′ = z₂, z₂′ = z₃, ⋯, zₙ₋₁′ = zₙ and zₙ′ from the ODE,
an n-th order ODE reduces to such a first order system.
Closure
Points to note
Outline
Example: y(x) = c xᵏ
With dy/dx = ckxᵏ⁻¹ and d²y/dx² = ck(k − 1)xᵏ⁻²,
eliminating c and k,
  xy (d²y/dx²) = x (dy/dx)² − y (dy/dx).
Separation of Variables
ODE form with separable variables:
  y′ = f(x, y) = φ(x)/ψ(y) or ψ(y) dy = φ(x) dx
Solution as quadrature:
  ∫ ψ(y) dy = ∫ φ(x) dx + c.
Separation of variables through substitution
Example: y′ = g(αx + βy + γ)
Substitute v = αx + βy + γ to arrive at
  dv/dx = α + β g(v) ⇒ x = ∫ dv/(α + β g(v)) + c
Homogeneous equation:
  y′ = f₁(x, y)/f₂(x, y) = φ₁(y/x)/φ₂(y/x)
Substituting u = y/x,
  dx/x = φ₂(u) du / [φ₁(u) − u φ₂(u)].
For y′ = (a₁x + b₁y + c₁)/(a₂x + b₂y + c₂), coordinate shift
  x = X + h, y = Y + k ⇒ y′ = dy/dx = dY/dX
produces
  dY/dX = [a₁X + b₁Y + (a₁h + b₁k + c₁)] / [a₂X + b₂Y + (a₂h + b₂k + c₂)].
Choose h and k such that
  a₁h + b₁k + c₁ = 0 = a₂h + b₂k + c₂.
If this system is inconsistent, then substitute u = a₁x + b₁y.
Clairaut's equation: y = px + f(p), with p = y′.
Differentiating,
  p = p + [x + f′(p)] dp/dx ⇒ [x + f′(p)] dp/dx = 0
dp/dx = 0 means y′ = p = m (constant):
family of straight lines y = mx + f(m)
Singular solution: x = −f′(p) and y = f(p) − p f′(p)
For equations with x absent, f(y, y′, y″) = 0: with p = y′,
  dp/dx = (dp/dy)(dy/dx) = p dp/dy ⇒ f(y, p, p dp/dy) = 0.
Exact Differential Equations
M(x, y) dx + N(x, y) dy = 0 is exact if there exists φ(x, y) such that
    M = ∂φ/∂x  and  N = ∂φ/∂y,
or, equivalently,
    ∂M/∂y = ∂N/∂x.
Then, with M(x, y) = ∂φ/∂x and N(x, y) = ∂φ/∂y,
    M dx + N dy = (∂φ/∂x) dx + (∂φ/∂y) dy = 0   ⇒   dφ = 0.
Solution: φ(x, y) = c
Working rule:
    φ1(x, y) = ∫ M(x, y) dx + g1(y)   and   φ2(x, y) = ∫ N(x, y) dy + g2(x)
Determine g1(y) and g2(x) from φ1(x, y) = φ2(x, y) = φ(x, y).
If ∂M/∂y ≠ ∂N/∂x, but ∂(FM)/∂y = ∂(FN)/∂x?
    F: integrating factor
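The working rule can be carried out symbolically. A sketch on the illustrative pair M = 2xy + 1, N = x² + 3y² (our own example, not from the slides):

```python
# Hedged sketch of the exactness test and the working rule, on the
# illustrative choice M = 2xy + 1, N = x^2 + 3y^2.
import sympy as sp

x, y = sp.symbols('x y')
M = 2*x*y + 1
N = x**2 + 3*y**2

# Exactness test: dM/dy == dN/dx
assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

# Working rule: integrate M in x, then fix g1(y) so that d(phi)/dy == N.
phi1 = sp.integrate(M, x)                   # x**2*y + x, up to g1(y)
g1 = sp.integrate(N - sp.diff(phi1, y), y)  # y**3
phi = phi1 + g1                             # x**2*y + x + y**3

# phi indeed generates both coefficients, so phi(x, y) = c is the solution.
assert sp.simplify(sp.diff(phi, x) - M) == 0
assert sp.simplify(sp.diff(phi, y) - N) == 0
```

Here the implicit solution is x²y + x + y³ = c.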
First order linear (Leibnitz) equation:
    dy/dx + P(x) y = Q(x)
Look for an integrating factor F(x) such that
    F(x) dy/dx + F(x) P(x) y = d/dx [F(x) y]   ⇒   dF/dx = F(x) P(x).
Integrating factor:  F(x) = e^(∫P(x)dx)
Solution:
    y(x) = e^(−∫P(x)dx) [ ∫ Q(x) e^(∫P(x)dx) dx + C ].
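The integrating-factor formula can be verified on a concrete case. A sketch with the illustrative choice P(x) = 2/x, Q(x) = x (our own, not from the slides):

```python
# Hedged sketch: the Leibnitz-equation solution via integrating factor,
# on the illustrative equation y' + (2/x) y = x.
import sympy as sp

x, C = sp.symbols('x C', positive=True)
P = 2/x
Q = x

F = sp.simplify(sp.exp(sp.integrate(P, x)))   # F = e^(∫P dx) = x**2
ysol = (sp.integrate(Q*F, x) + C) / F         # y = (∫ Q F dx + C) / F

# Check that the formula solves the ODE:  y' + P y - Q == 0
residual = sp.diff(ysol, x) + P*ysol - Q
assert sp.simplify(residual) == 0
```

For this example y(x) = x²/4 + C/x², a one-parameter family as expected.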
Bernoulli's equation:
    dy/dx + P(x) y = Q(x) y^k
Substitution z = y^(1−k), dz/dx = (1−k) y^(−k) dy/dx gives
    dz/dx + (1−k) P(x) z = (1−k) Q(x),
in the Leibnitz form.
Riccati equation:
    y' = a(x) + b(x) y + c(x) y²
If one solution y1(x) is known, then propose y(x) = y1(x) + z(x):
    y1'(x) + z'(x) = a(x) + b(x)[y1(x) + z(x)] + c(x)[y1(x) + z(x)]².
Since y1'(x) = a(x) + b(x) y1(x) + c(x)[y1(x)]²,
    z'(x) = [b(x) + 2 c(x) y1(x)] z(x) + c(x) [z(x)]²,
in the form of Bernoulli's equation.
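The Riccati reduction can be checked symbolically. A sketch on the illustrative Riccati equation y' = 2/x² − y², which has the known particular solution y1 = −1/x (our own example, not from the slides):

```python
# Hedged sketch: verify that z = y - y1 turns a Riccati equation into
# the Bernoulli form z' = (b + 2 c y1) z + c z^2.
import sympy as sp

x = sp.symbols('x', positive=True)
z = sp.Function('z')

a, b, c = 2/x**2, 0, -1          # Riccati: y' = a + b y + c y^2 (illustrative)
y1 = -1/x                        # one known particular solution
assert sp.simplify(sp.diff(y1, x) - (a + b*y1 + c*y1**2)) == 0

# Substitute y = y1 + z into the Riccati equation; the residual must agree
# with the Bernoulli residual term by term.
y = y1 + z(x)
riccati = sp.diff(y, x) - (a + b*y + c*y**2)
bernoulli = sp.diff(z(x), x) - ((b + 2*c*y1)*z(x) + c*z(x)**2)
assert sp.simplify(riccati - bernoulli) == 0
```

The two residuals coincide identically, so solving the Bernoulli equation for z(x) solves the Riccati equation for y(x) = y1(x) + z(x).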
Orthogonal Trajectories

Points to note
  Separating variables

Outline
Introduction
Homogeneous Equations with Constant Coefficients
Euler-Cauchy Equation
Theory of the Homogeneous Equations
Basis for Solutions

Homogeneous Equations with Constant Coefficients
    y'' + a y' + b y = 0
Assume y = e^(λx):
    y' = λ e^(λx)  and  y'' = λ² e^(λx).
Substitution: (λ² + a λ + b) e^(λx) = 0
Auxiliary equation:
    λ² + a λ + b = 0
Solve for λ1 and λ2:
Solutions: e^(λ1 x) and e^(λ2 x)
Three cases
1. Real distinct roots (a² > 4b): λ1 ≠ λ2;
       y(x) = c1 e^(λ1 x) + c2 e^(λ2 x).
2. Real equal roots (a² = 4b): λ1 = λ2 = λ = −a/2;
       y(x) = (c1 + c2 x) e^(−ax/2).
3. Complex roots (a² < 4b): λ1,2 = −a/2 ± iω;
       y(x) = c1 e^((−a/2 + iω)x) + c2 e^((−a/2 − iω)x)
            = e^(−ax/2) (A cos ωx + B sin ωx),
   with A = c1 + c2, B = i(c1 − c2).
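The three cases can be mechanized directly from the discriminant of the auxiliary equation. A sketch (the function name basis_functions is our own):

```python
# Hedged sketch: classify lam^2 + a*lam + b = 0 and return a description
# of the two basis solutions of y'' + a y' + b y = 0.
def basis_functions(a, b):
    """Basis of y'' + a y' + b y = 0 as (kind, parameters) tags."""
    disc = a*a - 4*b
    if disc > 0:                          # case 1: distinct real roots
        r = disc ** 0.5
        return ('exp', (-a + r)/2), ('exp', (-a - r)/2)
    if disc == 0:                         # case 2: double root -a/2
        lam = -a/2
        return ('exp', lam), ('x*exp', lam)
    omega = (-disc) ** 0.5 / 2            # case 3: roots -a/2 ± i*omega
    return ('exp*cos', -a/2, omega), ('exp*sin', -a/2, omega)

print(basis_functions(-6, 5))   # distinct real roots 5 and 1
print(basis_functions(-6, 9))   # double root 3: e^(3x), x e^(3x)
print(basis_functions(2, 5))    # roots -1 ± 2i: e^(-x) cos 2x, e^(-x) sin 2x
```

Exact equality with zero is safe here only for exact inputs; with floating-point coefficients a tolerance would be needed for the double-root case.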
Euler-Cauchy Equation
    x² y'' + a x y' + b y = 0
Substituting y = x^k, auxiliary (or indicial) equation:
    k² + (a − 1) k + b = 0
1. Roots real and distinct [(a − 1)² > 4b]: k1 ≠ k2;
       y(x) = c1 x^(k1) + c2 x^(k2).
2. Roots real and equal [(a − 1)² = 4b]: k1 = k2 = k = (1 − a)/2;
       y(x) = (c1 + c2 ln x) x^((1−a)/2).
3. Roots complex [(a − 1)² < 4b]: k = (1 − a)/2 ± iμ;
       y(x) = x^((1−a)/2) [A cos(μ ln x) + B sin(μ ln x)].
Theory of the Homogeneous Equations
    y'' + P(x) y' + Q(x) y = 0
Well-posedness of its IVP:
    The initial value problem of the ODE, with arbitrary
    initial conditions y(x0) = Y0, y'(x0) = Y1, has a unique
    solution, as long as P(x) and Q(x) are continuous in the
    interval in question.
There exist at least two linearly independent solutions.
Complete solution:
If y1 (x) and y2 (x) are two linearly independent solutions,
then the general solution is
y (x) = c1 y1 (x) + c2 y2 (x).
Wronskian:  W(y1, y2) = y1 y2' − y2 y1'.
If, at some x0, the system
    [ y1(x0)   y2(x0)  ] [ c1 ]   [ 0 ]
    [ y1'(x0)  y2'(x0) ] [ c2 ] = [ 0 ]
has a non-trivial solution, then the coefficient matrix is singular, i.e. W(x0) = 0.
Choose non-zero [c1, c2]^T and frame y(x) = c1 y1 + c2 y2, satisfying the
IVP  y'' + P y' + Q y = 0,  y(x0) = 0,  y'(x0) = 0.
By uniqueness, y(x) = 0 identically: the Wronskian of two solutions vanishes
at a point only if they are linearly dependent.
Basis for Solutions
Every solution is identified by its coefficients C1, C2 with respect to a
chosen basis {y1, y2} of the solution space.
Reduction of order: with one solution y1(x) known, propose y2(x) = u(x) y1(x).
Substitution into the ODE gives
    u'' y1 + u' (2 y1' + P y1) = 0,
a first order (separable) equation in U = u'. Then,
    u' = U = (1/y1²) e^(−∫P dx).
Integrating,
    u(x) = ∫ (1/y1²) e^(−∫P dx) dx,
and
    y2(x) = y1(x) ∫ (1/y1²) e^(−∫P dx) dx.
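The reduction-of-order formula can be exercised on a concrete equation. A sketch for x² y'' − 3x y' + 4y = 0 (an illustrative Euler-Cauchy equation with double indicial root, so P(x) = −3/x and y1 = x² in standard form):

```python
# Hedged sketch: second solution via y2 = y1 ∫ (1/y1^2) e^(-∫P dx) dx,
# for y'' - (3/x) y' + (4/x^2) y = 0 with known solution y1 = x^2.
import sympy as sp

x = sp.symbols('x', positive=True)
P = -3/x
Q = 4/x**2
y1 = x**2

u = sp.integrate(sp.exp(-sp.integrate(P, x)) / y1**2, x)   # u = ln x
y2 = sp.simplify(y1 * u)                                   # y2 = x^2 ln x

# y2 must solve the same homogeneous equation.
assert sp.simplify(sp.diff(y2, x, 2) + P*sp.diff(y2, x) + Q*y2) == 0
```

The computed y2 = x² ln x matches the known double-root Euler-Cauchy basis {x², x² ln x}.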
Points to note
  Solution basis
  Reduction of order

Outline
Analogy with linear algebraic systems: if x̄ satisfies A x = b, then for any
other solution x, the difference x − x̄ satisfies A x = 0, i.e. it is in the
null space of A.
Complete solution:
    x = x̄ + Σ_i c_i (x0)_i
Methodology:
  Find the null space of A, i.e. basis members (x0)_i.
  Find x̄ and compose.
Second Order Linear Non-Homogeneous ODEs
    y'' + a y' + b y = R(x)
Example:
  (a) y'' − 6y' + 5y = e^(3x)
  (b) y'' − 5y' + 6y = e^(3x)
  (c) y'' − 6y' + 9y = e^(3x)
Solutions:
  (a) y(x) = c1 e^x + c2 e^(5x) − (1/4) e^(3x)
  (b) y(x) = c1 e^(2x) + c2 e^(3x) + x e^(3x)
  (c) y(x) = c1 e^(3x) + c2 x e^(3x) + (1/2) x² e^(3x)
Modification rule: when the forcing term e^(3x) already solves the
corresponding homogeneous equation, multiply the candidate particular
solution by x, once for a simple root as in (b), twice for a double root
as in (c).
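The three particular solutions can be verified by direct substitution. A sketch:

```python
# Hedged sketch: check the particular solutions of the three examples
# by computing the residual of y'' + a y' + b y - e^(3x).
import sympy as sp

x = sp.symbols('x')

def residual(yp, a, b):
    """Residual of y'' + a y' + b y = e^(3x) at y = yp."""
    return sp.simplify(sp.diff(yp, x, 2) + a*sp.diff(yp, x) + b*yp - sp.exp(3*x))

assert residual(-sp.exp(3*x)/4, -6, 5) == 0        # (a): no resonance
assert residual(x*sp.exp(3*x), -5, 6) == 0         # (b): simple-root modification
assert residual(x**2*sp.exp(3*x)/2, -6, 9) == 0    # (c): double-root modification
```

All three residuals vanish, confirming the modification rule at work in (b) and (c).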
Method of variation of parameters:  yp = u1(x) y1(x) + u2(x) y2(x), so that
    yp' = u1' y1 + u1 y1' + u2' y2 + u2 y2'.
Condition u1' y1 + u2' y2 = 0 gives
    yp' = u1 y1' + u2 y2'.
Differentiating,
    yp'' = u1' y1' + u2' y2' + u1 y1'' + u2 y2''.
Substitution into the ODE:
    u1' y1' + u2' y2' + u1 y1'' + u2 y2'' + P(x)(u1 y1' + u2 y2') + Q(x)(u1 y1 + u2 y2) = R(x).
Rearranging,
    u1' y1' + u2' y2' + u1 [y1'' + P(x) y1' + Q(x) y1] + u2 [y2'' + P(x) y2' + Q(x) y2] = R(x).
As y1 and y2 satisfy the associated homogeneous equation,
    u1' y1' + u2' y2' = R(x),
giving
    [ y1   y2  ] [ u1' ]   [ 0 ]
    [ y1'  y2' ] [ u2' ] = [ R ].
Solving,
    u1' = −y2 R / W   and   u2' = y1 R / W.
Direct quadrature:
    u1(x) = −∫ y2(x) R(x) / W[y1(x), y2(x)] dx   and
    u2(x) =  ∫ y1(x) R(x) / W[y1(x), y2(x)] dx.
In contrast to the method of undetermined coefficients, variation of
parameters is general: it is applicable for any continuous functions
P(x), Q(x) and R(x).
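The quadratures can be carried out symbolically where undetermined coefficients fails. A sketch on the classic illustrative case y'' + y = sec x (our own choice, not from the slides), with basis y1 = cos x, y2 = sin x:

```python
# Hedged sketch: variation of parameters for y'' + y = sec x, where the
# forcing term is not of polynomial/exponential/sinusoid type.
import sympy as sp

x = sp.symbols('x')
y1, y2, R = sp.cos(x), sp.sin(x), sp.sec(x)
W = sp.simplify(y1*sp.diff(y2, x) - y2*sp.diff(y1, x))   # Wronskian = 1

u1 = sp.integrate(-y2*R/W, x)    # = log(cos x)
u2 = sp.integrate(y1*R/W, x)     # = x
yp = u1*y1 + u2*y2               # cos x · ln cos x + x sin x

# yp must satisfy the non-homogeneous equation.
assert sp.simplify(sp.diff(yp, x, 2) + yp - R) == 0
```

The resulting yp = cos x ln(cos x) + x sin x is the standard particular solution for this equation.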
Points to note

Outline
Higher order linear ODEs:
    y^(n) + P1(x) y^(n−1) + P2(x) y^(n−2) + ... + P_{n−1}(x) y' + P_n(x) y = R(x)
Associated homogeneous equation:
    y^(n) + P1(x) y^(n−1) + P2(x) y^(n−2) + ... + P_{n−1}(x) y' + P_n(x) y = 0
For solutions y1, y2, ..., yn of the homogeneous equation, define

             [ y1         y2         ...   yn         ]
             [ y1'        y2'        ...   yn'        ]
    Y(x) =   [ ...        ...        ...   ...        ]
             [ y1^(n−1)   y2^(n−1)   ...   yn^(n−1)   ]

Wronskian:  W(y1, y2, ..., yn) = det[Y(x)]
General solution of the homogeneous equation:  y(x) = Σ_{i=1}^n c_i y_i(x).
For constant coefficients, the auxiliary equation is
    λ^n + a1 λ^(n−1) + a2 λ^(n−2) + ... + a_{n−1} λ + a_n = 0.
Construction of the basis:
  1. For every simple real root λ = μ, e^(μx) is a solution.
  2. For every simple pair of complex roots λ = μ ± iω,
     e^(μx) cos ωx and e^(μx) sin ωx are linearly independent solutions.
  3. For every real root λ = μ of multiplicity r: e^(μx), x e^(μx), x² e^(μx),
     ..., x^(r−1) e^(μx) are all linearly independent solutions.
  4. For every complex pair of roots λ = μ ± iω of multiplicity r:
     e^(μx) cos ωx, e^(μx) sin ωx, x e^(μx) cos ωx, x e^(μx) sin ωx, ...,
     x^(r−1) e^(μx) cos ωx, x^(r−1) e^(μx) sin ωx are the required solutions.
Non-Homogeneous Equations
Method of undetermined coefficients: as in the second order case.
Method of variation of parameters:  yp(x) = Σ_{i=1}^n u_i(x) y_i(x).
Imposed conditions and the resulting derivatives:
    Σ_{i=1}^n u_i'(x) y_i(x) = 0            yp'(x) = Σ_{i=1}^n u_i(x) y_i'(x)
    ...                                      ...
    Σ_{i=1}^n u_i'(x) y_i^(n−2)(x) = 0      yp^(n−1)(x) = Σ_{i=1}^n u_i(x) y_i^(n−1)(x)
Finally,
    yp^(n)(x) = Σ_{i=1}^n u_i'(x) y_i^(n−1)(x) + Σ_{i=1}^n u_i(x) y_i^(n)(x).
Substituting into the ODE,
    Σ_{i=1}^n u_i(x) [y_i^(n) + P1 y_i^(n−1) + ... + Pn y_i] + Σ_{i=1}^n u_i'(x) y_i^(n−1)(x) = R(x).
Since each y_i(x) is a solution of the homogeneous equation,
    Σ_{i=1}^n u_i'(x) y_i^(n−1)(x) = R(x).
Together with the imposed conditions, Y(x) u' = e_n R(x), and, with
Y^(−1) = adj Y / det(Y),
    u' = [Y(x)]^(−1) e_n R(x) = (R(x)/det[Y(x)]) [last column of adj Y(x)]
       = (R(x)/W(x)) [last column of adj Y(x)],
i.e.
    u_i'(x) = (W_i(x)/W(x)) R(x),
with W_i(x) the corresponding cofactor of Y(x).
Points to note

Outline
Laplace Transforms
  Introduction
  Basic Properties and Results
  Application to Differential Equations
  Handling Discontinuities
  Convolution
  Advanced Issues

Introduction
Classical perspective: an integral transform
    T[f(t)] = ∫ K(s, t) f(t) dt,
with
    s: frequency variable
    K(s, t): kernel of the transform
Note: T[f(t)] is a function of s, not t.
A practical situation
  You have a plant
  Implication
Definition, as an improper integral:
    L{f(t)} = ∫₀^∞ e^(−st) f(t) dt = lim_{b→∞} ∫₀^b e^(−st) f(t) dt
Basic Properties and Results
    L(cos ωt) = s/(s² + ω²),     L(sin ωt) = ω/(s² + ω²);
    L(cosh at) = s/(s² − a²),    L(sinh at) = a/(s² − a²);
    L(e^(αt) cos ωt) = (s − α)/((s − α)² + ω²),
    L(e^(αt) sin ωt) = ω/((s − α)² + ω²).
Laplace transform of a derivative:
    L{f'(t)} = ∫₀^∞ e^(−st) f'(t) dt = [e^(−st) f(t)]₀^∞ + s ∫₀^∞ e^(−st) f(t) dt
             = s L{f(t)} − f(0),
and, in general,
    L{f^(n)(t)} = s^n L{f(t)} − s^(n−1) f(0) − s^(n−2) f'(0) − ... − f^(n−1)(0).
For the integral g(t) = ∫₀^t f(τ) dτ, g(0) = 0, and
    L{g'(t)} = s L{g(t)} − g(0) = s L{g(t)}   ⇒   L{g(t)} = (1/s) L{f(t)}.
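A few of these entries, and the derivative rule, can be confirmed with SymPy's built-in transform:

```python
# Hedged sketch: spot-check the transform table and L{f'} = s F(s) - f(0)
# with sympy.laplace_transform.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, w = sp.symbols('a omega', positive=True)

Lcos = sp.laplace_transform(sp.cos(w*t), t, s, noconds=True)
assert sp.simplify(Lcos - s/(s**2 + w**2)) == 0

Lsinh = sp.laplace_transform(sp.sinh(a*t), t, s, noconds=True)
assert sp.simplify(Lsinh - a/(s**2 - a**2)) == 0

# Derivative rule on the illustrative f(t) = e^(-2t):
f = sp.exp(-2*t)
lhs = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)
rhs = s*sp.laplace_transform(f, t, s, noconds=True) - f.subs(t, 0)
assert sp.simplify(lhs - rhs) == 0
```

The `noconds=True` flag drops the convergence conditions (here Re s large enough), which is safe for this spot check.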
Application to Differential Equations
Example: initial value problem of a linear constant coefficient ODE
    y'' + a y' + b y = r(t),  y(0) = K0,  y'(0) = K1.
Laplace transforms of both sides of the ODE:
    (s² + a s + b) Y(s) = R(s) + (s + a) K0 + K1.
For zero initial conditions, the transfer function:
    Y(s)/R(s) = 1/(s² + a s + b)   (in this case).
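The transformed algebra can be inverted mechanically. A sketch for the illustrative IVP y'' + 3y' + 2y = 0, y(0) = 1, y'(0) = 0 (our own numbers, not from the slides):

```python
# Hedged sketch: solve an IVP through (s^2 + a s + b) Y = R + (s + a) K0 + K1,
# then invert the transform.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, b, K0, K1 = 3, 2, 1, 0

Y = ((s + a)*K0 + K1) / (s**2 + a*s + b)        # R(s) = 0 for this IVP
ysol = sp.inverse_laplace_transform(Y, s, t)
ysol = ysol.subs(sp.Heaviside(t), 1)            # restrict to t > 0

# Known closed form for these coefficients: y(t) = 2 e^(-t) - e^(-2t)
assert sp.simplify(ysol - (2*sp.exp(-t) - sp.exp(-2*t))) == 0
```

Both initial conditions check out: y(0) = 2 − 1 = 1 and y'(0) = −2 + 2 = 0.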
Handling Discontinuities
Unit step function:
    u(t − a) = 0 if t < a,  1 if t > a;
    L{u(t − a)} = ∫₀^a 0 dt + ∫_a^∞ e^(−st) dt = e^(−as)/s.
Shifted function:
    0 if t < a,  f(t − a) if t > a;
    L{u(t − a) f(t − a)} = ∫_a^∞ e^(−st) f(t − a) dt
                         = ∫₀^∞ e^(−s(a+τ)) f(τ) dτ = e^(−as) L{f(t)}.
A unit-area pulse: define
    f_k(t − a) = 1/k if a ≤ t ≤ a + k,  0 otherwise
               = (1/k) u(t − a) − (1/k) u(t − a − k),
so that
    ∫_a^(a+k) (1/k) dt = 1.
[figure: (a) the two scaled step functions, (b) their composition f_k(t − a), (c) a general function f(t − a)]
In the limit, the unit impulse (Dirac delta) function:
    δ(t − a) = lim_{k→0} f_k(t − a),
or,
    δ(t − a) = ∞ if t = a,  0 otherwise;   and   ∫₀^∞ δ(t − a) dt = 1.
Laplace transform:
    L{δ(t − a)} = lim_{k→0} (1/k) [L{u(t − a)} − L{u(t − a − k)}]
                = lim_{k→0} [e^(−as) − e^(−(a+k)s)] / (ks) = e^(−as).
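The limit above can be watched numerically; as k shrinks, the pulse-transform quotient approaches e^(−as). A sketch with illustrative values a = 1, s = 2:

```python
# Hedged sketch: numeric check that (e^(-as) - e^(-(a+k)s)) / (k s) -> e^(-as)
# as k -> 0, i.e. L{delta(t - a)} = e^(-as).
import math

a, s = 1.0, 2.0
exact = math.exp(-a*s)
for k in (1e-2, 1e-4, 1e-6):
    approx = (math.exp(-a*s) - math.exp(-(a + k)*s)) / (k*s)
    print(k, approx)          # approaches exp(-2) ≈ 0.1353...

assert abs((math.exp(-a*s) - math.exp(-(a + 1e-6)*s)) / (1e-6*s) - exact) < 1e-4
```

The leading error is of order ks/2, consistent with expanding e^(−ks) to second order.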
Convolution
Convolution of two functions:
    h(t) = (f ∗ g)(t) = ∫₀^t f(τ) g(t − τ) dτ.
[figure: the region of integration in the (τ, t) plane, in the original and
reversed orders of integration]
Laplace transform of the convolution:
    H(s) = ∫₀^∞ f(τ) [ ∫_τ^∞ e^(−st) g(t − τ) dt ] dτ.
Through substitution t' = t − τ,
    H(s) = ∫₀^∞ f(τ) [ ∫₀^∞ e^(−s(t'+τ)) g(t') dt' ] dτ
         = ∫₀^∞ e^(−sτ) f(τ) dτ  ∫₀^∞ e^(−st') g(t') dt'
         = F(s) G(s).
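The convolution theorem is easy to confirm on concrete functions. A sketch with the illustrative pair f = e^(−t), g = e^(−2t) (our own choice):

```python
# Hedged sketch: verify L{f * g} = F(s) G(s) for f = e^(-t), g = e^(-2t).
import sympy as sp

t, tau, s = sp.symbols('t tau s', positive=True)
f = sp.exp(-t)
g = sp.exp(-2*t)

# (f * g)(t) = ∫_0^t f(tau) g(t - tau) dtau  =  e^(-t) - e^(-2t)
h = sp.integrate(f.subs(t, tau) * g.subs(t, t - tau), (tau, 0, t))

H = sp.laplace_transform(h, t, s, noconds=True)
F = sp.laplace_transform(f, t, s, noconds=True)
G = sp.laplace_transform(g, t, s, noconds=True)
assert sp.simplify(H - F*G) == 0
```

Here H(s) = 1/(s+1) − 1/(s+2), which is exactly 1/[(s+1)(s+2)] = F(s) G(s).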
Points to note
Outline

ODE Systems
  Fundamental Ideas
  Linear Homogeneous Systems with Constant Coefficients
  Linear Non-Homogeneous Systems
  Nonlinear Systems

Fundamental Ideas
    y' = f(t, y)
Solution: a vector function y = h(t)
Autonomous system: y' = f(y)
For a linear homogeneous system,
    y' = A(t) y,
n linearly independent solutions y1(t), y2(t), ..., yn(t) give a basis.
General solution:
    y(t) = Σ_{i=1}^n c_i y_i(t) = [Y(t)] c
Linear Homogeneous Systems with Constant Coefficients
    y' = A y
Non-degenerate case: matrix A non-singular.
Attempt y = x e^(λt)  ⇒  y' = λ x e^(λt).
Substitution: A x e^(λt) = λ x e^(λt)  ⇒  A x = λ x
If A is diagonalizable:
    n linearly independent solutions y_i = x_i e^(λ_i t) corresponding to the
    n eigenpairs.
If A is not diagonalizable?
    The solutions x_i e^(λ_i t) together will not complete the basis.
    Try y = x t e^(λt)? Substitution leads to
        x e^(λt) + λ x t e^(λt) = A x t e^(λt)   ⇒   x e^(λt) = 0   ⇒   x = 0.
    Absurd!
ODE Systems
1061,
Fundamental Ideas
Linear Homogeneous Systems with Constant
Coefficients
Linear Homogeneous
Systems with Constant Coeffici
Linear Non-Homogeneous Systems
Nonlinear Systems
ODE Systems
1062,
Fundamental Ideas
Linear Homogeneous Systems with Constant
Coefficients
Linear Homogeneous
Systems with Constant Coeffici
Linear Non-Homogeneous Systems
Nonlinear Systems
ODE Systems
1063,
Fundamental Ideas
Linear Homogeneous Systems with Constant Coeffici
Linear Non-Homogeneous Systems
Nonlinear Systems
ODE Systems
n
X
ci yi (t) = [Y(t)]c
i =1
Complete solution:
1064,
Fundamental Ideas
Linear Homogeneous Systems with Constant Coeffici
Linear Non-Homogeneous Systems
Nonlinear Systems
ODE Systems
n
X
ci yi (t) = [Y(t)]c
i =1
Complete solution:
1065,
Fundamental Ideas
Linear Homogeneous Systems with Constant Coeffici
Linear Non-Homogeneous Systems
Nonlinear Systems
ODE Systems
1066,
Fundamental Ideas
Linear Homogeneous Systems with Constant Coeffici
Linear Non-Homogeneous Systems
Nonlinear Systems
Method of diagonalization
If A is a diagonalizable constant matrix, with X1 AX = D,
changing variables to z = X1 y, such that y = Xz,
Xz = AXz+g(t) z = X1 AXz+X1 g(t) = Dz+h(t) (say).
ODE Systems
dk t
+e
dk t
1067,
Fundamental Ideas
Linear Homogeneous Systems with Constant Coeffici
Linear Non-Homogeneous Systems
Nonlinear Systems
e dk t hk (t)dt.
ODE Systems
dk t
+e
dk t
1068,
Fundamental Ideas
Linear Homogeneous Systems with Constant Coeffici
Linear Non-Homogeneous Systems
Nonlinear Systems
e dk t hk (t)dt.
Method of variation of parameters: propose y_p(t) = [Y(t)] u(t).
Substitution gives [Y(t)] u' = g(t), hence
    y_p(t) = [Y(t)] ∫ [Y(t)]^(−1) g(t) dt.

Points to note
Necessary Exercises: 1

Outline
Stability of Dynamic Systems
For the two-dimensional linear system y' = A y:
Characteristic equation:
    λ² − p λ + q = 0,
with p = a11 + a22 = λ1 + λ2 and q = a11 a22 − a12 a21 = λ1 λ2.
Discriminant D = p² − 4q and
    λ_{1,2} = p/2 ± sqrt((p/2)² − q) = (p ± sqrt(D)) / 2.
Solution (for diagonalizable A):
    y = c1 x1 e^(λ1 t) + c2 x2 e^(λ2 t)
Solution for deficient A (λ1 = λ2 = λ, with generalized eigenvector u):
    y = c1 x1 e^(λt) + c2 (t x1 + u) e^(λt),
    y' = c1 λ x1 e^(λt) + c2 (x1 + λ u) e^(λt) + t c2 λ x1 e^(λt).
[figure: phase portraits in the (y1, y2) plane; panels include (b) centre and (c) spiral, along with saddle and node types]
Sub-type
Node
improper
proper
degenerate
Eigenvalues
real, opposite signs
pure imaginary
complex, both
non-zero components
real, same sign
unequal in magnitude
equal, diagonalizable
equal, deficient
Stability
unstable
stable
stable
if p < 0,
unstable
if p > 0
1082,
Sub-type
Node
improper
proper
degenerate
Eigenvalues
real, opposite signs
pure imaginary
complex, both
non-zero components
real, same sign
unequal in magnitude
equal, diagonalizable
equal, deficient
q
spiral
c
e
n
t
r
e
stable
node
spiral
p2
0
4q =
unstable
node
saddle point
unstable
Stability
unstable
stable
stable
if p < 0,
unstable
if p > 0
1083,
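The table above can be condensed into a small classifier working from the invariants p = tr A and q = det A. This is a sketch; the function name and return strings are illustrative, not from the source.

```python
def classify_critical_point(p, q):
    """Classify the critical point of y' = Ay from p = tr(A), q = det(A).

    Follows the table: q < 0 gives a saddle; D = p^2 - 4q >= 0 (with q > 0)
    gives a node; D < 0 gives a spiral, or a centre when p = 0.
    """
    if q < 0:
        return "saddle point (unstable)"
    D = p * p - 4 * q
    if D < 0:
        if p == 0:
            return "centre (stable)"
        return "spiral (stable)" if p < 0 else "spiral (unstable)"
    return "node (stable)" if p < 0 else "node (unstable)"

print(classify_critical_point(0, 1))    # pure imaginary eigenvalues
print(classify_critical_point(3, -2))   # real eigenvalues of opposite signs
print(classify_critical_point(-1, 1))   # complex pair with negative real part
```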
Important terms

Stability: If y₀ is a critical point of the dynamic system y' = f(y) and for every ε > 0 there exists δ > 0 such that
‖y(t₀) − y₀‖ < δ ⇒ ‖y(t) − y₀‖ < ε ∀ t > t₀,
then y₀ is a stable critical point. If, further, y(t) → y₀ as t → ∞, then y₀ is said to be asymptotically stable.

Positive definite function: A function V(y), with V(0) = 0, is called positive definite if V(y) > 0 ∀ y ≠ 0.

Lyapunov function: A positive definite function V(y), having continuous ∂V/∂y_i, with a negative semi-definite rate of change
V̇ = [∇V(y)]ᵀ f(y).
Points to note
Outline
Power Series Method

y(x) = Σ_{n=0}^∞ a_n xⁿ = a₀ + a₁x + a₂x² + a₃x³ + a₄x⁴ + a₅x⁵ + ···

or in powers of (x − x₀).

A simple exercise: … and 4x²y'' = y.
y'' + P(x)y' + Q(x)y = 0

If P(x) and Q(x) are analytic at a point x = x₀, i.e. if they possess convergent series expansions in powers of (x − x₀) with some radius of convergence R, then the solution is analytic at x₀, and a power series solution
y(x) = a₀ + a₁(x − x₀) + a₂(x − x₀)² + a₃(x − x₀)³ + ···
exists. With x₀ = 0,
P(x) = Σ_{n=0}^∞ p_n xⁿ = p₀ + p₁x + p₂x² + p₃x³ + ···,
Q(x) = Σ_{n=0}^∞ q_n xⁿ = q₀ + q₁x + q₂x² + q₃x³ + ···.
Substituting y(x) = Σ_{n=0}^∞ a_n xⁿ, so that
y'' = Σ_{n=0}^∞ (n+2)(n+1) a_{n+2} xⁿ,
P(x)y' = [Σ_{n=0}^∞ p_n xⁿ][Σ_{n=0}^∞ (n+1) a_{n+1} xⁿ] = Σ_{n=0}^∞ [Σ_{k=0}^n p_{n−k}(k+1) a_{k+1}] xⁿ,
Q(x)y = [Σ_{n=0}^∞ q_n xⁿ][Σ_{n=0}^∞ a_n xⁿ] = Σ_{n=0}^∞ [Σ_{k=0}^n q_{n−k} a_k] xⁿ,
leads to
Σ_{n=0}^∞ { (n+2)(n+1) a_{n+2} + Σ_{k=0}^n [p_{n−k}(k+1) a_{k+1} + q_{n−k} a_k] } xⁿ = 0.

Recursion formula:
a_{n+2} = − [1/((n+2)(n+1))] Σ_{k=0}^n [(k+1) p_{n−k} a_{k+1} + q_{n−k} a_k]
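The recursion can be exercised directly. A sketch: for y'' + y = 0 we have P = 0 and Q = 1, so all p_n vanish and q₀ = 1; starting from a₀ = 1, a₁ = 0 the recursion reproduces the Maclaurin series of cos x (the function name is illustrative).

```python
def series_coefficients(p, q, a0, a1, nterms):
    """Coefficients a_n of a power series solution of y'' + P y' + Q y = 0,
    where p[n], q[n] are the series coefficients of P(x), Q(x)."""
    a = [a0, a1]
    for n in range(nterms - 2):
        s = sum((k + 1) * p[n - k] * a[k + 1] + q[n - k] * a[k]
                for k in range(n + 1))
        a.append(-s / ((n + 2) * (n + 1)))
    return a

# y'' + y = 0 with y(0) = 1, y'(0) = 0: coefficients of cos x
N = 10
p = [0.0] * N
q = [1.0] + [0.0] * (N - 1)
a = series_coefficients(p, q, 1.0, 0.0, N)
```

The recursion collapses to a_{n+2} = −a_n/[(n+2)(n+1)], giving a₂ = −1/2!, a₄ = 1/4!, with all odd coefficients zero.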
Frobenius Method

For P(x) = b(x)/x and Q(x) = c(x)/x², the equation takes the form
x²y'' + x b(x)y' + c(x)y = 0,
in which b(x) and c(x) are analytic at the origin.

Working steps:
1. Assume the solution in the form y(x) = x^r Σ_{n=0}^∞ a_n xⁿ.
2. Differentiate to obtain the series for y'(x) and y''(x).
3. Substitute these series for y(x), y'(x) and y''(x) into the given ODE and collect coefficients of x^r, x^{r+1}, x^{r+2} etc.
4. Equate the coefficient of x^r to zero to obtain an equation in the index r, called the indicial equation,
r(r − 1) + b₀ r + c₀ = 0,
allowing a₀ to remain arbitrary.
5. For each solution r, equate the other coefficients to zero to obtain a₁, a₂, a₃ etc in terms of a₀.
Special Functions Defined as Integrals

Gamma function: Γ(n) = ∫₀^∞ e^{−x} x^{n−1} dx, convergent for n > 0.
The recurrence relation Γ(1) = 1, Γ(n + 1) = nΓ(n) allows extension of the definition to the entire real line except for zero and negative integers.
Γ(n + 1) = n! for non-negative integers. (A generalization of the factorial function.)

Beta function: B(m, n) = ∫₀¹ x^{m−1}(1 − x)^{n−1} dx = 2 ∫₀^{π/2} sin^{2m−1}θ cos^{2n−1}θ dθ; m, n > 0.
B(m, n) = B(n, m); B(m, n) = Γ(m)Γ(n)/Γ(m + n).

Error function: erf(x) = (2/√π) ∫₀^x e^{−t²} dt. (Area under the normal or Gaussian distribution.)

Sine integral function: Si(x) = ∫₀^x (sin t)/t dt.
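These definitions can be checked numerically with Python's math module (a sketch; the helper beta is an illustrative name, built from the Gamma relation above).

```python
import math

g5 = math.gamma(5)                         # Gamma(5) = 4! = 24
recur = math.gamma(4.5) / math.gamma(3.5)  # Gamma(n+1)/Gamma(n) = n = 3.5

def beta(m, n):
    """B(m, n) = Gamma(m) Gamma(n) / Gamma(m + n)."""
    return math.gamma(m) * math.gamma(n) / math.gamma(m + n)

b23 = beta(2, 3)       # = 1! * 2! / 4! = 1/12
e1 = math.erf(1.0)     # (2/sqrt(pi)) * integral_0^1 e^{-t^2} dt
```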
Special Functions Arising as Solutions of ODEs
Equation                                                              Resulting functions
Airy's equation: y'' − k²xy = 0                                       Airy functions
Chebyshev's equation: (1 − x²)y'' − xy' + k²y = 0                     Chebyshev polynomials
Hermite's equation: y'' − 2xy' + 2ky = 0                              Hermite functions, Hermite polynomials
Legendre's equation: (1 − x²)y'' − 2xy' + k(k + 1)y = 0               Legendre functions, Legendre polynomials
Bessel's equation: x²y'' + xy' + (x² − k²)y = 0                       Bessel, Neumann and Hankel functions
Gauss's hypergeometric equation: x(1 − x)y'' + [c − (a + b + 1)x]y' − ab y = 0   Hypergeometric function
Laguerre's equation: xy'' + (1 − x)y' + ky = 0                        Laguerre polynomials
Legendre's equation: (1 − x²)y'' − 2xy' + k(k + 1)y = 0

x = 0 is an ordinary point and a power series solution y(x) = Σ_{n=0}^∞ a_n xⁿ is convergent at least for |x| < 1. The recursion gives
a₂ = −[k(k + 1)/2!] a₀,
a₃ = −[(k + 2)(k − 1)/3!] a₁,
and
a_{n+2} = −[(k − n)(k + n + 1)/((n + 2)(n + 1))] a_n for n ≥ 2.
Legendre functions: two independent solutions
y₁(x) = 1 − [k(k + 1)/2!] x² + [k(k + 1)(k − 2)(k + 3)/4!] x⁴ − ···,
y₂(x) = x − [(k − 1)(k + 2)/3!] x³ + [(k − 1)(k + 2)(k − 3)(k + 4)/5!] x⁵ − ···.
For non-negative integer k, one of the two series terminates: that solution is a polynomial of degree k. Running the recursion backwards from a_k,
a_{k−2} = −[k(k − 1)/(2(2k − 1))] a_k.
Legendre polynomial: choosing a_k = (2k − 1)(2k − 3)···3·1 / k!,

P_k(x) = [(2k − 1)(2k − 3)···3·1 / k!] { x^k − [k(k − 1)/(2(2k − 1))] x^{k−2} + [k(k − 1)(k − 2)(k − 3)/(2·4(2k − 1)(2k − 3))] x^{k−4} − ··· }.
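An equivalent way to generate P_k(x) is the standard three-term recurrence (n + 1)P_{n+1} = (2n + 1)xP_n − nP_{n−1}; a sketch (the function name is illustrative):

```python
def legendre(k, x):
    """P_k(x) via (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}, P_0 = 1, P_1 = x."""
    p_prev, p = 1.0, x
    if k == 0:
        return p_prev
    for n in range(1, k):
        p_prev, p = p, ((2 * n + 1) * x * p - n * p_prev) / (n + 1)
    return p

# Spot-checks: P_2(x) = (3x^2 - 1)/2, and P_k(1) = 1 for every k
print(legendre(2, 0.5))   # (3*0.25 - 1)/2 = -0.125
print(legendre(5, 1.0))   # 1.0
```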
Figure: Legendre polynomials P₀(x) to P₅(x) over [−1, 1]; all values remain within [−1, 1].

All roots of a Legendre polynomial are real and they lie in [−1, 1].
Orthogonality?
Bessel's equation: x²y'' + xy' + (x² − k²)y = 0

x = 0 is a regular singular point.
Frobenius method: carrying out the early steps,
(r² − k²)a₀ x^r + [(r + 1)² − k²]a₁ x^{r+1} + Σ_{n=2}^∞ [a_{n−2} + {r² − k² + n(n + 2r)}a_n] x^{r+n} = 0
Indicial equation: r² − k² = 0 ⇒ r = ±k.
With r = k, (r + 1)² − k² ≠ 0 ⇒ a₁ = 0 and
a_n = − a_{n−2} / [n(n + 2r)] for n ≥ 2.
Bessel functions: selecting a₀ = 1/[2^k Γ(k + 1)] and using n = 2m,
a_{2m} = (−1)^m / [2^{k+2m} m! Γ(k + m + 1)].

J_k(x) = Σ_{m=0}^∞ (−1)^m x^{k+2m} / [2^{k+2m} m! Γ(k + m + 1)] = Σ_{m=0}^∞ [(−1)^m / (m! Γ(k + m + 1))] (x/2)^{k+2m}

Points to note
Legendre polynomials
Bessel functions
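The series converges rapidly for moderate x, so a direct partial sum is usable. A sketch (the function name and the choice of 30 terms are illustrative):

```python
import math

def bessel_j(k, x, terms=30):
    """Partial sum of J_k(x) = sum_m (-1)^m / (m! Gamma(k+m+1)) (x/2)^{k+2m}."""
    s = 0.0
    for m in range(terms):
        s += (-1) ** m / (math.factorial(m) * math.gamma(k + m + 1)) \
             * (x / 2) ** (k + 2 * m)
    return s

print(bessel_j(0, 0.0))                 # J_0(0) = 1
print(bessel_j(0, 2.404825557695773))   # near the first zero of J_0
```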
Outline
Sturm-Liouville Theory
Preliminary Ideas
Sturm-Liouville Problems
Eigenfunction Expansions
Question:
For what values of λ (eigenvalues) does the given BVP possess non-trivial solutions, and what are the corresponding solutions (eigenfunctions), up to arbitrary scalar multiples?

Analogous to the algebraic eigenvalue problem Av = λv!
Analogue of a Hermitian matrix: self-adjoint differential operator.
For the ODE y'' + P(x)y' + Q(x)y = 0, seek F(x) such that F y'' + FP y' + FQ y is an exact derivative [F y' + G y]', which requires G(x) = F(x)P(x) − F'(x) and G'(x) = F(x)Q(x).

Elimination of G(x):
F'' − [P(x)F]' + Q(x)F = 0, i.e. F'' − P(x)F' + [Q(x) − P'(x)]F = 0.
This is the adjoint of the original ODE.
Write the adjoint ODE as F'' + P₁F' + Q₁F = 0, where P₁ = −P and Q₁ = Q − P'.
Then, the adjoint of F'' + P₁F' + Q₁F = 0 is
φ'' + P₂φ' + Q₂φ = 0,
where P₂ = −P₁ = P and Q₂ = Q₁ − P₁' = Q − P' − (−P') = Q.
The adjoint of the adjoint is the original ODE.
When can the ODE be written directly in the self-adjoint form d/dx[F(x)y'] + φ(x)y = 0?
Writing the adjoint equation as
d²/dx²(Fψ) − d/dx(FPψ) + FQψ = 0
and expanding,
Fψ'' + 2F'ψ' + F''ψ + FQψ = FPψ' + (FP)'ψ
⇒ Fψ'' + (2F' − FP)ψ' + [F'' − (FP)' + FQ]ψ = 0.
This coincides with the original ODE multiplied through by F, i.e. Fψ'' + FPψ' + FQψ = 0, precisely when 2F' − FP = FP, i.e.
F'(x) = F(x)P(x) ⇒ F(x) = e^{∫P dx},
and then F'' − (FP)' = 0 automatically, with
φ(x) = FQ.
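For Legendre's equation, P(x) = −2x/(1 − x²), so F = e^{∫P dx} = 1 − x² recasts it as [(1 − x²)y']' + k(k + 1)y = 0. A numerical spot-check of the condition F' = FP (a sketch; function names are illustrative, and F'(x) is approximated by a central difference):

```python
def P(x):
    # coefficient P(x) of Legendre's equation in standard form
    return -2 * x / (1 - x * x)

def F(x):
    # integrating factor exp(integral of P dx) = 1 - x^2
    return 1 - x * x

h = 1e-6
for x in (-0.5, 0.1, 0.7):
    dF = (F(x + h) - F(x - h)) / (2 * h)   # central-difference F'(x)
    assert abs(dF - F(x) * P(x)) < 1e-6    # verifies F' = F P
print("F' = F P verified at sample points")
```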
Sturm-Liouville equation
[r(x)y']' + [q(x) + λp(x)]y = 0,
where p, q, r and r' are continuous on [a, b], with p(x) > 0 on [a, b] and r(x) > 0 on (a, b).

With different boundary conditions,
Regular S-L problem: a₁y(a) + a₂y'(a) = 0 and b₁y(b) + b₂y'(b) = 0, vectors [a₁ a₂]ᵀ and [b₁ b₂]ᵀ being non-zero.
Periodic S-L problem: with r(a) = r(b), y(a) = y(b) and y'(a) = y'(b).
Singular S-L problem: if r(a) = 0, no boundary condition is needed at x = a; if r(b) = 0, no boundary condition is needed at x = b. (We just look for bounded solutions over [a, b].)
Orthogonality of eigenfunctions
For two eigenpairs (λ_m, y_m) and (λ_n, y_n),
(r y_m')' + (q + λ_m p)y_m = 0,
(r y_n')' + (q + λ_n p)y_n = 0,
so that
(q + λ_m p)y_m y_n = −(r y_m')' y_n and (q + λ_n p)y_m y_n = −(r y_n')' y_m.
Subtracting,
(λ_m − λ_n) p y_m y_n = (r y_n')' y_m − (r y_m')' y_n = [r(y_m y_n' − y_n y_m')]'.
Integrating both sides,
(λ_m − λ_n) ∫_a^b p(x) y_m(x) y_n(x) dx = [r(y_m y_n' − y_n y_m')]_a^b = 0,
the right side vanishing under each class of boundary conditions. Hence, for λ_m ≠ λ_n, the eigenfunctions are orthogonal with respect to the weight p(x).

Example: for Legendre's equation, r(x) = 1 − x² vanishes at both ends, so for m ≠ n,
(λ_m − λ_n) ∫_{−1}^{1} P_m(x) P_n(x) dx = [(1 − x²)(P_m P_n' − P_n P_m')]_{−1}^{1} = 0.
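The orthogonality can be confirmed numerically; a sketch using composite Simpson quadrature with P₂ and P₃ written out explicitly (function names are illustrative):

```python
def simpson(f, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

P2 = lambda x: (3 * x * x - 1) / 2
P3 = lambda x: (5 * x ** 3 - 3 * x) / 2

i23 = simpson(lambda x: P2(x) * P3(x), -1.0, 1.0)   # orthogonal: ~0
i22 = simpson(lambda x: P2(x) ** 2, -1.0, 1.0)      # norm^2 = 2/(2*2+1) = 2/5
```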
Real eigenvalues
Suppose λ = μ + iν with eigenfunction y(x) = u(x) + iv(x). Separating real and imaginary parts of the Sturm-Liouville equation,
(ru')' + (q + μp)u − νpv = 0,
(rv')' + (q + μp)v + νpu = 0.
Multiplying the first by v and the second by u and combining,
νp(u² + v²) = [ru']'v − [rv']'u = [r(u'v − v'u)]'.
Integration and application of boundary conditions leads to
ν ∫_a^b p(x)[u²(x) + v²(x)] dx = 0 ⇒ ν = 0 and λ = μ.
Eigenvalues of a Sturm-Liouville problem are real.
Eigenfunction Expansions

Expansion of a function f(x) in terms of the eigenfunctions:
f(x) = Σ_{m=0}^∞ a_m y_m(x)

Inner product:
(f, y_n) = ∫_a^b p(x) f(x) y_n(x) dx = Σ_{m=0}^∞ a_m (y_m, y_n) = a_n ‖y_n‖²,
where ‖y_n‖ = √(y_n, y_n) = √[∫_a^b p(x) y_n²(x) dx].

Fourier coefficients: a_n = (f, y_n)/‖y_n‖².

Normalized eigenfunctions: φ_m(x) = y_m(x)/‖y_m(x)‖.

Approximating f(x) by Σ_{m=0}^N α_m φ_m(x), the error is
E = ‖f − Σ_{m=0}^N α_m φ_m‖² = ∫_a^b p(x) [f(x) − Σ_{m=0}^N α_m φ_m(x)]² dx.

E is minimized when α_n = c_n = (f, φ_n):
the best approximation in the mean or least square approximation.

With this choice,
E = ‖f‖² − 2 Σ_{n=0}^N c_n (f, φ_n) + Σ_{n=0}^N c_n² (φ_n, φ_n) = ‖f‖² − 2 Σ_{n=0}^N c_n² + Σ_{n=0}^N c_n² = ‖f‖² − Σ_{n=0}^N c_n² ≥ 0.

Bessel's inequality:
Σ_{n=0}^N c_n² ≤ ‖f‖² = ∫_a^b p(x) f²(x) dx.

Partial sum:
s_k(x) = Σ_{m=0}^k c_m φ_m(x)
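As a concrete instance with p(x) = 1 and y_n = P_n on [−1, 1], the formula a_n = (f, y_n)/‖y_n‖² with ‖P_n‖² = 2/(2n + 1) recovers x³ = (3/5)P₁ + (2/5)P₃. A sketch with Simpson quadrature:

```python
def simpson(f, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

P1 = lambda x: x
P3 = lambda x: (5 * x ** 3 - 3 * x) / 2
f = lambda x: x ** 3

# a_n = (f, P_n) / ||P_n||^2 with weight p(x) = 1 and ||P_n||^2 = 2/(2n+1)
a1 = simpson(lambda x: f(x) * P1(x), -1, 1) / (2 / 3)
a3 = simpson(lambda x: f(x) * P3(x), -1, 1) / (2 / 7)
```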
The expansion converges to f in the mean if ‖s_k − f‖ → 0 as k → ∞.
Points to note
Sturm-Liouville problems
Orthogonal eigenfunctions
Eigenfunction expansions
Outline
Fourier Series and Integrals
Fourier series
f(x) = a₀ + Σ_{n=1}^∞ [a_n cos(nπx/L) + b_n sin(nπx/L)],
with a₀ = (1/2L)∫_{−L}^{L} f(x) dx, a_n = (1/L)∫_{−L}^{L} f(x) cos(nπx/L) dx and b_n = (1/L)∫_{−L}^{L} f(x) sin(nπx/L) dx.

Dirichlet's conditions:
If f(x) and its derivative are piecewise continuous on [−L, L] and are periodic with a period 2L, then the series converges to the mean [f(x+) + f(x−)]/2 of the one-sided limits at all points.

Note: The interval of integration can be [x₀, x₀ + 2L] for any x₀.
Parseval's identity:
a₀² + (1/2) Σ_{n=1}^∞ (a_n² + b_n²) = (1/2L) ∫_{−L}^{L} f²(x) dx,
i.e. a₀² + (1/2) Σ_{n=1}^∞ (a_n² + b_n²) = (1/2L) ‖f(x)‖².
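A numerical illustration, assuming f(x) = x on [−π, π] (L = π): then a₀ = a_n = 0 and b_n = 2(−1)^{n+1}/n, and Parseval's identity reduces to the classical sum Σ 1/n² = π²/6 (a sketch):

```python
import math

# f(x) = x on [-pi, pi]: a_0 = a_n = 0, b_n = 2(-1)^{n+1}/n
lhs = 0.5 * sum((2.0 / n) ** 2 for n in range(1, 200000))
rhs = math.pi ** 2 / 3   # (1/2L) * integral of x^2 over [-pi, pi]
print(lhs, rhs)          # the partial sum approaches pi^2/3 from below
```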
Extensions in Application
Fourier cosine series:
f(x) = a₀ + Σ_{n=1}^∞ a_n cos(nπx/L),
where a₀ = (1/L)∫₀^L f(x) dx and a_n = (2/L)∫₀^L f(x) cos(nπx/L) dx.
Figure: A function f(x) given over [0, L] and its odd periodic extension f_s(x) of period 2L, shown over −3L ≤ x ≤ 3L.
Half-range expansions
For Fourier cosine series of a function f(x) over [0, L], even periodic extension:
f_c(x) = f(x) for 0 ≤ x ≤ L, f(−x) for −L ≤ x < 0, and f_c(x + 2L) = f_c(x).
For Fourier sine series of a function f(x) over [0, L], odd periodic extension:
f_s(x) = f(x) for 0 ≤ x ≤ L, −f(−x) for −L ≤ x < 0, and f_s(x + 2L) = f_s(x).
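A sketch of a half-range expansion: the sine series of f(x) = 1 on [0, π] (whose odd extension is a square wave) has b_n = 4/(nπ) for odd n and 0 for even n; the partial sums approach f = 1 in the interior.

```python
import math

def sine_series_of_one(x, nmax=20001):
    """Partial sum of the Fourier sine series of f(x) = 1 on [0, pi]:
    b_n = 4/(n*pi) for odd n, 0 for even n."""
    return sum(4 / (n * math.pi) * math.sin(n * x)
               for n in range(1, nmax, 2))

s = sine_series_of_one(math.pi / 2)
print(s)   # close to 1
```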
Fourier Integrals
For a function f_L(x) of period 2L,
f_L(x) = a₀ + Σ_{n=1}^∞ (a_n cos p_n x + b_n sin p_n x), where p_n = nπ/L,
and Δp = p_{n+1} − p_n = π/L. Letting L → ∞ turns the sum into the Fourier integral
f(x) = (1/π) ∫₀^∞ ∫_{−∞}^∞ f(v) cos p(x − v) dv dp.
Using cos θ = (e^{iθ} + e^{−iθ})/2 and, in the second half, the substitution p = −q,
(1/2π) ∫₀^∞ ∫_{−∞}^∞ f(v) e^{−ip(x−v)} dv dp = (1/2π) ∫_{−∞}^0 ∫_{−∞}^∞ f(v) e^{iq(x−v)} dv dq,
so that
f(x) = (1/2π) ∫_{−∞}^∞ ∫_{−∞}^∞ f(v) e^{ip(x−v)} dv dp.
Points to note

Outline
Fourier Transforms
Definition and Fundamental Properties
Important Results on Fourier Transforms
Discrete Fourier Transform
Fourier Transforms
f(t) = (1/√2π) ∫_{−∞}^∞ [ (1/√2π) ∫_{−∞}^∞ f(v) e^{−iwv} dv ] e^{iwt} dw
Composition of an infinite number of functions in the form e^{iwt}/√2π, over a continuous distribution of frequency w.

Fourier transform: amplitude of a frequency component:
F(f) ≡ f̂(w) = (1/√2π) ∫_{−∞}^∞ f(t) e^{−iwt} dt,
a function of the frequency variable.

Inverse Fourier transform:
F^{−1}(f̂) ≡ f(t) = (1/√2π) ∫_{−∞}^∞ f̂(w) e^{iwt} dw
F(1) = √(2π) δ(w)

Linearity of Fourier transforms:
F{f₁(t) + f₂(t)} = f̂₁(w) + f̂₂(w)

Scaling:
F{f(at)} = (1/|a|) f̂(w/a) and F^{−1}{f̂(w/a)} = |a| f(at)

Shifting rules:
F{f(t − t₀)} = e^{−iwt₀} f̂(w) and F^{−1}{f̂(w − w₀)} = e^{iw₀t} f(t)
Transform of the derivative (assuming f(t) → 0 as t → ±∞):
F{f'(t)} = (1/√2π) ∫_{−∞}^∞ f'(t) e^{−iwt} dt
 = [ (1/√2π) f(t) e^{−iwt} ]_{−∞}^∞ − (1/√2π) ∫_{−∞}^∞ (−iw) f(t) e^{−iwt} dt
 = iw f̂(w).

Alternatively, differentiating the inverse Fourier transform,
d/dt [f(t)] = d/dt (1/√2π) ∫_{−∞}^∞ f̂(w) e^{iwt} dw = (1/√2π) ∫_{−∞}^∞ [iw f̂(w)] e^{iwt} dw = F^{−1}{iw f̂(w)}.
Similarly, for the running integral,
F{ ∫_{−∞}^t f(τ) dτ } = (1/iw) f̂(w).
Convolution: h(t) = (f ∗ g)(t) = ∫_{−∞}^∞ f(τ) g(t − τ) dτ

ĥ(w) = F{h(t)}
 = (1/√2π) ∫∫ f(τ) g(t − τ) e^{−iwt} dτ dt
 = (1/√2π) ∫ f(τ) e^{−iwτ} [ ∫ g(t − τ) e^{−iw(t−τ)} dt ] dτ
 = (1/√2π) ∫ f(τ) e^{−iwτ} [ ∫ g(t') e^{−iwt'} dt' ] dτ

⇒ ĥ(w) = √(2π) f̂(w) ĝ(w)
A related identity: substituting the definition of f̂(w) and interchanging the order of integration,
∫_{−∞}^∞ f̂(w) g(w) dw = ∫_{−∞}^∞ f(t) [ (1/√2π) ∫_{−∞}^∞ g(w) e^{−iwt} dw ] dt = ∫_{−∞}^∞ f(t) ĝ(t) dt.
Important Results on Fourier Transforms
Example: the rectangular pulse
f(t) = 1 for a ≤ t ≤ b, 0 otherwise,
has the transform
f̂(w) = (1/√2π) ∫_a^b e^{−iwt} dt = (e^{−iwa} − e^{−iwb}) / (iw √2π).
Discrete Fourier Transform
Sampling f(t) at N instants t_k and evaluating the transform at N frequencies w_j gives the linear transformation
f̂(w) = (Δt/√2π) [m_{jk}] f(t), where m_{jk} = e^{−i w_j t_k}
and [m_{jk}] is an N × N matrix: direct evaluation costs O(N²) operations.

Cooley-Tukey algorithm: the fast Fourier transform (FFT), exploiting the structure of [m_{jk}] to reduce the cost to O(N log N).

Points to note
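A direct O(N²) sketch of the matrix transformation and its inverse; the conventions (t_k = k, w_j = 2πj/N, and the scale factor absorbed so that the round trip is exact) are illustrative, not the slide's normalization.

```python
import cmath

def dft(x):
    """Direct DFT: X_j = sum_k x_k e^{-i w_j t_k}, t_k = k, w_j = 2*pi*j/N.
    This is the matrix [m_jk] applied to the samples: O(N^2) operations."""
    N = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / N)
                for k in range(N)) for j in range(N)]

def idft(X):
    """Inverse transform, restoring the samples exactly (factor 1/N)."""
    N = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / N)
                for j in range(N)) / N for k in range(N)]

x = [1.0, 2.0, 0.0, -1.0]
y = idft(dft(x))
err = max(abs(a - b) for a, b in zip(x, y))
```

An FFT obtains the same result by recursively splitting [m_{jk}] into even- and odd-indexed halves.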
Outline
Minimax Approximation*
Approximation with Chebyshev polynomials
Minimax Polynomial Approximation
Chebyshev polynomials:
Polynomial solutions of the singular Sturm-Liouville problem
(1 − x²)y'' − xy' + n²y = 0, or [√(1 − x²) y']' + [n²/√(1 − x²)] y = 0.
Immediate observations
Coefficients in a Chebyshev polynomial are integers. In particular, the leading coefficient of T_n(x) is 2^{n−1}.
For even n, T_n(x) is an even function, while for odd n it is an odd function.
T_n(1) = 1, T_n(−1) = (−1)^n and |T_n(x)| ≤ 1 for −1 ≤ x ≤ 1.
Zeros of a Chebyshev polynomial T_n(x) are real and lie inside the interval [−1, 1] at locations x = cos[(2k − 1)π/(2n)] for k = 1, 2, 3, …, n. These locations are also called Chebyshev accuracy points. Further, zeros of T_n(x) are interlaced by those of T_{n+1}(x).
Extrema of T_n(x) are of magnitude equal to unity, alternate in sign and occur at x = cos(kπ/n) for k = 0, 1, 2, 3, …, n.
Orthogonality and norms:
∫_{−1}^{1} T_m(x)T_n(x)/√(1 − x²) dx = 0 if m ≠ n, π/2 if m = n ≠ 0, and π if m = n = 0.
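These observations can be verified with the standard recurrence T_{n+1}(x) = 2xT_n(x) − T_{n−1}(x); a sketch (the function name is illustrative):

```python
import math

def chebyshev_t(n, x):
    """T_n(x) via T_{n+1} = 2x T_n - T_{n-1}, with T_0 = 1, T_1 = x."""
    t_prev, t = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

# a zero of T_5 at cos((2k-1)pi/10), k = 2, and an extremum at cos(2pi/5)
z = chebyshev_t(5, math.cos(3 * math.pi / 10))
ext = chebyshev_t(5, math.cos(2 * math.pi / 5))
```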
Figure: Extrema and zeros of T₃(x). Figure: Contrast: P₈(x) and T₈(x), both plotted over [−1, 1].
Minimax property
Theorem: Among all polynomials pn (x) of degree n > 0
with the leading coefficient equal to unity, 21n Tn (x)
deviates least from zero in [1, 1]. That is,
max |pn (x)| max |21n Tn (x)| = 21n .
1x1
1x1
1247,
Minimax Approximation*
Minimax property
Theorem: Among all polynomials pn (x) of degree n > 0
with the leading coefficient equal to unity, 21n Tn (x)
deviates least from zero in [1, 1]. That is,
max |pn (x)| max |21n Tn (x)| = 21n .
1x1
1x1
1x1
1248,
Minimax Approximation*
Minimax property
Theorem: Among all polynomials pn (x) of degree n > 0
with the leading coefficient equal to unity, 21n Tn (x)
deviates least from zero in [1, 1]. That is,
max |pn (x)| max |21n Tn (x)| = 21n .
1x1
1x1
1x1
1249,
Chebyshev series
f(x) = a₀T₀(x) + a₁T₁(x) + a₂T₂(x) + a₃T₃(x) + ···
with coefficients
a₀ = (1/π) ∫_{−1}^{1} [f(x)T₀(x)/√(1 − x²)] dx and a_n = (2/π) ∫_{−1}^{1} [f(x)T_n(x)/√(1 − x²)] dx for n = 1, 2, 3, ….

A truncated series Σ_{k=0}^n a_kT_k(x): Chebyshev economization.
Leading error term a_{n+1}T_{n+1}(x) deviates least from zero over [−1, 1] and is qualitatively similar to the error function.

Question: How to develop a Chebyshev series approximation? Find out so many Chebyshev polynomials and evaluate coefficients?

Assuming a_{n+1}T_{n+1}(x) to be the error, at the zeros of T_{n+1}(x) the error will be officially zero, i.e.
Σ_{k=0}^{n} a_kT_k(x_j) = f(x_j)
at the n + 1 Chebyshev accuracy points x_j: an interpolation.
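A sketch of this interpolation route, using the discrete orthogonality of T_k at the accuracy points x_j = cos[(2j + 1)π/(2(n + 1))] (the function name is illustrative): for f(x) = x² it recovers x² = ½T₀ + ½T₂.

```python
import math

def chebyshev_interp_coeffs(f, n):
    """Coefficients a_0..a_n with sum_k a_k T_k(x_j) = f(x_j) at the
    n+1 accuracy points (zeros of T_{n+1}), via discrete orthogonality."""
    m = n + 1
    thetas = [(2 * j + 1) * math.pi / (2 * m) for j in range(m)]
    coeffs = []
    for k in range(m):
        c = 2 / m * sum(f(math.cos(t)) * math.cos(k * t) for t in thetas)
        coeffs.append(c / 2 if k == 0 else c)   # a_0 carries a half factor
    return coeffs

a = chebyshev_interp_coeffs(lambda x: x * x, 3)
print(a)   # approximately [0.5, 0, 0.5, 0]
```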
Minimax Polynomial Approximation
Minimax Approximation*
Utilize any gap to reduce the deviation at the other extrema with
values at the bound.
y
d
f(x) p(x)
p(x)
/2
O
a
/2
1258,
Minimax Approximation*
Utilize any gap to reduce the deviation at the other extrema with
values at the bound.
y
d
f(x) p(x)
p(x)
/2
O
a
/2
1259,
Points to note
Minimax Approximation*
Approximation with Chebyshev polynomials
Minimax Polynomial Approximation
1260,
Outline

Partial Differential Equations
- Introduction
- Hyperbolic Equations
- Parabolic Equations
- Elliptic Equations
- Two-Dimensional Wave Equation

Introduction

Quasi-linear second order PDEs

$$a\,\frac{\partial^2 u}{\partial x^2} + 2b\,\frac{\partial^2 u}{\partial x\,\partial y} + c\,\frac{\partial^2 u}{\partial y^2} = F(x, y, u, u_x, u_y)$$
Introduction

Method of separation of variables

For u(x, y), propose a solution in the form

$$u(x, y) = X(x)\,Y(y)$$

and substitute

$$u_x = X'Y,\quad u_y = XY',\quad u_{xx} = X''Y,\quad u_{xy} = X'Y',\quad u_{yy} = XY''$$

to cast the equation into the form

$$\phi(x, X, X', X'') = \psi(y, Y, Y', Y'').$$

If the manoeuvre succeeds then, x and y being independent variables, it implies

$$\phi(x, X, X', X'') = \psi(y, Y, Y', Y'') = k.$$

Nature of the separation constant k is decided based on the context; the resulting ODEs are solved in consistency with the boundary conditions and assembled to construct u(x, y).
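A quick numerical sanity check of the product-form idea (an assumed example, not from the slides): a separated solution of Laplace's equation, $u(x, y) = \sin(px)\sinh(py)$, should make $u_{xx} + u_{yy}$ vanish, and central differences confirm it:

```python
import math

# u = X(x) Y(y) = sin(p x) sinh(p y) is a product solution of u_xx + u_yy = 0
def u(x, y, p=3.0):
    return math.sin(p * x) * math.sinh(p * y)

def laplacian(f, x, y, h=1e-4):
    # central-difference approximation of f_xx + f_yy
    return ((f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
            + (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2)

print(laplacian(u, 0.7, 0.4))  # close to zero
```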
Hyperbolic Equations

[Figure: an element PQ of a taut string, displaced transversely by u(x, t).]

Under the assumptions, denoting $c^2 = T/\rho$,

$$\frac{\partial^2 u}{\partial t^2} = c^2\,\frac{1}{\Delta x}\left[\left.\frac{\partial u}{\partial x}\right|_Q - \left.\frac{\partial u}{\partial x}\right|_P\right] \ \longrightarrow\ \frac{\partial^2 u}{\partial t^2} = c^2\,\frac{\partial^2 u}{\partial x^2} \quad\text{as } \Delta x \to 0.$$
Hyperbolic Equations

Solution by separation of variables

With u(x, t) = X(x)T(t) and the boundary conditions u(0, t) = u(L, t) = 0, non-trivial solutions arise only for the separation constants $p = \frac{n\pi}{L}$, n = 1, 2, 3, ....

Corresponding solution:

$$u_n(x, t) = \left[A_n\cos\lambda_n t + B_n\sin\lambda_n t\right]\sin\frac{n\pi x}{L},\qquad \lambda_n = \frac{cn\pi}{L};$$

by superposition,

$$u(x, t) = \sum_{n=1}^{\infty}\left[A_n\cos\lambda_n t + B_n\sin\lambda_n t\right]\sin\frac{n\pi x}{L}.$$
Hyperbolic Equations

Initial conditions:

$$u(x, 0) = f(x) = \sum_{n=1}^{\infty} A_n \sin\frac{n\pi x}{L} \quad\text{and}\quad u_t(x, 0) = g(x) = \sum_{n=1}^{\infty} \lambda_n B_n \sin\frac{n\pi x}{L}.$$

Hence, coefficients:

$$A_n = \frac{2}{L}\int_0^L f(x)\sin\frac{n\pi x}{L}\,dx \quad\text{and}\quad B_n = \frac{2}{cn\pi}\int_0^L g(x)\sin\frac{n\pi x}{L}\,dx.$$

Related problems:
- Different boundary conditions: other kinds of series
- Long wire: infinite domain, continuous frequencies and solution from Fourier integrals
- Alternative: reduce the problem using Fourier transforms.
- General wave equation in 3-d: $u_{tt} = c^2\nabla^2 u$
- Membrane equation: $u_{tt} = c^2(u_{xx} + u_{yy})$
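The coefficient formulas are easy to exercise numerically. A sketch with assumed data (not from the slides): a string of length L = 1 released at rest (g = 0, so every $B_n = 0$) from a triangular pluck f(x); the series at t = 0 should reproduce the pluck.

```python
import math

L, c, N = 1.0, 1.0, 60  # assumed string data and truncation level

def f(x):  # triangular initial pluck, peak 0.05 at x = L/2
    return 0.1 * x if x <= L / 2 else 0.1 * (L - x)

def sine_coeff(n, m=2000):
    # midpoint rule for A_n = (2/L) * integral_0^L f(x) sin(n pi x / L) dx
    h = L / m
    return 2 / L * sum(f((j + 0.5) * h) * math.sin(n * math.pi * (j + 0.5) * h / L) * h
                       for j in range(m))

A = [sine_coeff(n) for n in range(1, N + 1)]

def u(x, t):
    # truncated series solution with B_n = 0
    return sum(A[n - 1] * math.cos(n * math.pi * c * t / L)
               * math.sin(n * math.pi * x / L) for n in range(1, N + 1))

print(u(0.3, 0.0), f(0.3))  # the series at t = 0 reproduces the pluck
```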
Hyperbolic Equations

For a hyperbolic equation in the form

$$a\,\frac{\partial^2 u}{\partial x^2} + 2b\,\frac{\partial^2 u}{\partial x\,\partial y} + c\,\frac{\partial^2 u}{\partial y^2} = F(x, y, u, u_x, u_y),$$

the characteristic directions $\dfrac{dy}{dx} = \dfrac{b \pm \sqrt{b^2 - ac}}{a}$ supply the characteristic coordinates $\xi$, $\eta$.

For the wave equation $u_{tt} = c^2 u_{xx}$: $\xi = x - ct$ and $\eta = x + ct$, with $x = \frac{1}{2}(\xi + \eta)$, $t = \frac{1}{2c}(\eta - \xi)$.
Hyperbolic Equations

Substitution of derivatives

$$u_x = U_\xi \xi_x + U_\eta \eta_x = U_\xi + U_\eta \ \Rightarrow\ u_{xx} = U_{\xi\xi} + 2U_{\xi\eta} + U_{\eta\eta},$$
$$u_t = U_\xi \xi_t + U_\eta \eta_t = -cU_\xi + cU_\eta \ \Rightarrow\ u_{tt} = c^2 U_{\xi\xi} - 2c^2 U_{\xi\eta} + c^2 U_{\eta\eta}$$

into the PDE $u_{tt} = c^2 u_{xx}$ gives

$$c^2(U_{\xi\xi} - 2U_{\xi\eta} + U_{\eta\eta}) = c^2(U_{\xi\xi} + 2U_{\xi\eta} + U_{\eta\eta}).$$

Canonical form: $U_{\xi\eta} = 0$

Integration:

$$U_\eta = \int U_{\xi\eta}\,d\xi + \psi(\eta) = \psi(\eta),$$
$$U(\xi, \eta) = \int \psi(\eta)\,d\eta + f_1(\xi) = f_1(\xi) + f_2(\eta).$$
Hyperbolic Equations

With $\lambda_n = \frac{cn\pi}{L}$, each term of the series solution splits into travelling waves:

$$\cos\lambda_n t\,\sin\frac{n\pi x}{L} = \frac{1}{2}\left[\sin\frac{n\pi}{L}(x - ct) + \sin\frac{n\pi}{L}(x + ct)\right],$$
$$\sin\lambda_n t\,\sin\frac{n\pi x}{L} = \frac{1}{2}\left[\cos\frac{n\pi}{L}(x - ct) - \cos\frac{n\pi}{L}(x + ct)\right].$$
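The travelling-wave (d'Alembert) form can be spot-checked numerically (an assumed example, not from the slides): any $u(x, t) = f_1(x - ct) + f_2(x + ct)$ with smooth profiles satisfies $u_{tt} = c^2 u_{xx}$.

```python
import math

c = 2.0
f1 = lambda s: math.exp(-s * s)  # arbitrary smooth left-moving profile
f2 = lambda s: math.sin(s)       # arbitrary smooth right-moving profile

def u(x, t):
    return f1(x - c * t) + f2(x + c * t)

def second(g, s, h=1e-3):
    # central-difference second derivative
    return (g(s + h) - 2 * g(s) + g(s - h)) / h**2

x0, t0 = 0.4, 0.25
utt = second(lambda t: u(x0, t), t0)
uxx = second(lambda x: u(x, t0), x0)
print(utt, c**2 * uxx)  # the two sides of the wave equation agree
```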
Parabolic Equations

Heat (diffusion) equation $u_t = c^2 u_{xx}$, separated with $u(x, t) = X(x)T(t)$:

$$\frac{T'}{c^2 T} = \frac{X''}{X} = -p^2,$$

and, with u(0, t) = u(L, t) = 0, again $p = \frac{n\pi}{L}$ and

$$T_n(t) = A_n e^{-\lambda_n^2 t},\qquad \lambda_n = \frac{cn\pi}{L}.$$

By superposition,

$$u(x, t) = \sum_{n=1}^{\infty} A_n \sin\frac{n\pi x}{L}\,e^{-\lambda_n^2 t},\qquad u(x, 0) = f(x) = \sum_{n=1}^{\infty} A_n \sin\frac{n\pi x}{L},$$

the initial condition determining the coefficients $A_n$.
Parabolic Equations

Non-homogeneous boundary conditions u(0, t) = u1, u(L, t) = u2: steady-state component

$$u_{ss}''(x) = 0,\quad u_{ss}(0) = u_1,\quad u_{ss}(L) = u_2 \ \Rightarrow\ u_{ss}(x) = u_1 + (u_2 - u_1)\frac{x}{L}.$$

Substituting $u(x, t) = U(x, t) + u_{ss}(x)$ into the BVP,

$$U_t = c^2 U_{xx},\quad U(0, t) = U(L, t) = 0,\quad U(x, 0) = f(x) - u_{ss}(x).$$

Final solution:

$$u(x, t) = \sum_{n=1}^{\infty} B_n \sin\frac{n\pi x}{L}\,e^{-\lambda_n^2 t} + u_{ss}(x).$$
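A minimal instance of the final solution (assumed data, not from the slides): ends held at u1 = 0, u2 = 1 on [0, 1], initial profile $f(x) = u_{ss}(x) + \sin(\pi x)$, so only the n = 1 term survives with $B_1 = 1$; the transient decays toward the steady state.

```python
import math

c, L, u1, u2 = 1.0, 1.0, 0.0, 1.0   # assumed rod data
uss = lambda x: u1 + (u2 - u1) * x / L

def u(x, t):
    # one-term series: B_1 = 1, lambda_1 = c*pi/L
    lam1 = c * math.pi / L
    return math.sin(math.pi * x / L) * math.exp(-lam1**2 * t) + uss(x)

print(u(0.5, 0.0))  # initial value: uss(0.5) + 1
print(u(0.5, 5.0))  # transient decayed: close to uss(0.5) = 0.5
```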
Parabolic Equations

Heat conduction in an infinite wire

Initial condition: $u(x, 0) = f(x)$, $-\infty < x < \infty$.
Parabolic Equations

Solution using Fourier transforms

$$\mathcal{F}(u_t) = c^2 (iw)^2\,\mathcal{F}(u) \ \Rightarrow\ \frac{\partial \hat u}{\partial t} = -c^2 w^2 \hat u,$$

since variables x and t are independent.

Initial value problem in $\hat u(w, t)$:

$$\frac{\partial \hat u}{\partial t} = -c^2 w^2 \hat u,\qquad \hat u(w, 0) = \hat f(w)$$

Solution: $\hat u(w, t) = \hat f(w)\,e^{-c^2 w^2 t}$

Inverse Fourier transform gives solution of the original problem as

$$u(x, t) = \mathcal{F}^{-1}\{\hat u(w, t)\} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \hat f(w)\,e^{-c^2 w^2 t}\,e^{iwx}\,dw$$

$$\Rightarrow\quad u(x, t) = \frac{1}{\pi}\int_{-\infty}^{\infty} f(v) \int_0^{\infty} \cos(wx - wv)\,e^{-c^2 w^2 t}\,dw\,dv.$$
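The inner w-integral has the closed form $\int_0^\infty \cos(ws)\,e^{-c^2 w^2 t}\,dw = \frac{1}{2}\sqrt{\pi/(c^2 t)}\,e^{-s^2/(4c^2 t)}$, which turns the double integral into the familiar heat-kernel representation. A numerical check of that identity (an assumed example, not from the slides):

```python
import math

c, t, s = 1.0, 0.5, 0.8  # assumed parameter values

def integrand(w):
    return math.cos(w * s) * math.exp(-c**2 * w**2 * t)

# trapezoid rule on [0, W]; the Gaussian factor makes the tail negligible
W, m = 12.0, 20000
h = W / m
num = h * ((integrand(0) + integrand(W)) / 2
           + sum(integrand(j * h) for j in range(1, m)))

exact = 0.5 * math.sqrt(math.pi / (c**2 * t)) * math.exp(-s**2 / (4 * c**2 * t))
print(num, exact)
```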
Elliptic Equations

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0 \qquad \text{(Laplace's equation)}$$

Steady-state heat flow in a rectangular plate:

$$u_{xx} + u_{yy} = 0,\quad u(0, y) = u(a, y) = u(x, 0) = 0,\quad u(x, b) = f(x);$$

a Dirichlet problem over the domain $0 \le x \le a$, $0 \le y \le b$.

Proposal u(x, y) = X(x)Y(y) leads to

$$X''Y + XY'' = 0 \ \Rightarrow\ \frac{X''}{X} = -\frac{Y''}{Y} = -p^2.$$

Separated ODEs:

$$X'' + p^2 X = 0 \quad\text{and}\quad Y'' - p^2 Y = 0$$
Elliptic Equations

$$X_n(x) = \sin\frac{n\pi x}{a},\qquad Y_n(y) = A_n\cosh\frac{n\pi y}{a} + B_n\sinh\frac{n\pi y}{a};$$

the condition u(x, 0) = 0 forces $A_n = 0$, leaving

$$u_n(x, y) = B_n \sin\frac{n\pi x}{a}\,\sinh\frac{n\pi y}{a},\qquad u(x, y) = \sum_{n=1}^{\infty} B_n \sin\frac{n\pi x}{a}\,\sinh\frac{n\pi y}{a}.$$

The remaining boundary condition u(x, b) = f(x) determines the coefficients $B_n$ through a Fourier sine series.
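A one-term instance of the Dirichlet solution (assumed data, not from the slides): plate $0 \le x \le a$, $0 \le y \le b$ with $f(x) = \sin(\pi x/a)$ on the top edge, so only n = 1 survives with $B_1\sinh(\pi b/a) = 1$. The result matches the boundary data and is harmonic in the interior.

```python
import math

a, b = 2.0, 1.0  # assumed plate dimensions
B1 = 1.0 / math.sinh(math.pi * b / a)

def u(x, y):
    return B1 * math.sin(math.pi * x / a) * math.sinh(math.pi * y / a)

# finite-difference Laplacian at an interior point
h = 1e-3
lap = ((u(1.3 + h, 0.4) - 2 * u(1.3, 0.4) + u(1.3 - h, 0.4)) / h**2
       + (u(1.3, 0.4 + h) - 2 * u(1.3, 0.4) + u(1.3, 0.4 - h)) / h**2)

print(u(1.3, b), math.sin(math.pi * 1.3 / a))  # matches f on the top edge
print(lap)  # close to zero in the interior
```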
Two-Dimensional Wave Equation

Membrane equation $u_{tt} = c^2(u_{xx} + u_{yy})$, with $u(x, y, t) = F(x, y)\,T(t)$:

$$\frac{F_{xx} + F_{yy}}{F} = \frac{T''}{c^2 T} = -\nu^2$$

Helmholtz equation:

$$F_{xx} + F_{yy} + \nu^2 F = 0$$

Separating further with $F(x, y) = X(x)Y(y)$:

$$\frac{X''}{X} = -\frac{Y'' + \nu^2 Y}{Y} = -\mu^2 \ \Rightarrow\ X'' + \mu^2 X = 0 \ \text{ and }\ Y'' + p^2 Y = 0,$$

such that $\nu = \sqrt{\mu^2 + p^2}$.

With BCs X(0) = X(a) = 0 and Y(0) = Y(b) = 0,

$$X_m(x) = \sin\frac{m\pi x}{a} \quad\text{and}\quad Y_n(y) = \sin\frac{n\pi y}{b},\qquad \nu_{mn} = \pi\sqrt{\left(\frac{m}{a}\right)^2 + \left(\frac{n}{b}\right)^2},$$

with solutions of $T'' + c^2\nu_{mn}^2 T = 0$.
Two-Dimensional Wave Equation

$$u(x, y, t) = \sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\left[A_{mn}\cos c\nu_{mn} t + B_{mn}\sin c\nu_{mn} t\right]\sin\frac{m\pi x}{a}\,\sin\frac{n\pi y}{b},$$

with the initial conditions supplying

$$f(x, y) = \sum_{m=1}^{\infty}\sum_{n=1}^{\infty} A_{mn}\sin\frac{m\pi x}{a}\,\sin\frac{n\pi y}{b} \quad\text{and}\quad g(x, y) = \sum_{m=1}^{\infty}\sum_{n=1}^{\infty} c\nu_{mn} B_{mn}\sin\frac{m\pi x}{a}\,\sin\frac{n\pi y}{b}.$$

Points to note

- Separation of variables
- Examples of boundary value problems with hyperbolic, parabolic and elliptic equations
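The mode shapes and frequencies are immediate to use (assumed data, not from the slides): on a unit square with c = 1 the fundamental angular frequency is $c\,\nu_{11} = \pi\sqrt{2}$, and every mode vanishes on all four edges at all times.

```python
import math

a = b = c = 1.0  # assumed unit membrane

def nu(m, n):
    return math.pi * math.sqrt((m / a) ** 2 + (n / b) ** 2)

def u_mode(m, n, x, y, t):
    # one standing mode with A_mn = 1, B_mn = 0
    return (math.cos(c * nu(m, n) * t)
            * math.sin(m * math.pi * x / a) * math.sin(n * math.pi * y / b))

print(nu(1, 1), math.pi * math.sqrt(2))        # fundamental frequency
print(u_mode(2, 3, 0.0, 0.37, 0.5))            # zero on the edge x = 0
```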
Outline

Analytic Functions
- Analyticity of Complex Functions
- Conformal Mapping
- Potential Theory
Analytic Functions

Derivative of a complex function:

$$f'(z_0) = \lim_{z \to z_0}\frac{f(z) - f(z_0)}{z - z_0} = \lim_{\delta z \to 0}\frac{f(z_0 + \delta z) - f(z_0)}{\delta z}$$
Analytic Functions

Cauchy-Riemann conditions

If f(z) = u(x, y) + iv(x, y) is analytic then

$$f'(z) = \lim_{\delta x, \delta y \to 0}\frac{\delta u + i\,\delta v}{\delta x + i\,\delta y}$$

along all paths of approach for $\delta z = \delta x + i\,\delta y \to 0$, or $\delta x, \delta y \to 0$.

[Figure: alternative paths of approach to z0 in the complex plane, including the axis-parallel ones δz = δx and δz = iδy.]
Analytic Functions

Writing δf = δu + i δv and applying the mean value theorem to u and v over the increment, the difference quotient takes the form

$$\frac{\delta f}{\delta z} = \frac{\partial u}{\partial x} + i\,\frac{\partial v}{\partial x} + \text{terms that vanish as } \delta z \to 0.$$

Since $\left|\frac{\delta x}{\delta z}\right|, \left|\frac{\delta y}{\delta z}\right| \le 1$ as $\delta z \to 0$, the limit exists, and using the C-R conditions $\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}$, $\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$,

$$f'(z) = \frac{\partial u}{\partial x} + i\,\frac{\partial v}{\partial x} = \frac{\partial v}{\partial y} - i\,\frac{\partial u}{\partial y}.$$

Differentiating the C-R conditions once more,

$$\frac{\partial^2 u}{\partial x^2} = \frac{\partial^2 v}{\partial x\,\partial y},\quad \frac{\partial^2 u}{\partial y^2} = -\frac{\partial^2 v}{\partial y\,\partial x},\quad \frac{\partial^2 v}{\partial x^2} = -\frac{\partial^2 u}{\partial x\,\partial y},\quad \frac{\partial^2 v}{\partial y^2} = \frac{\partial^2 u}{\partial y\,\partial x};$$

$$\Rightarrow\quad \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0 = \frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2}:$$

the real and imaginary parts of an analytic function are harmonic.
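The C-R conditions are easy to probe numerically (an assumed example, not from the slides): for the analytic function $f(z) = e^z$, finite differences of u and v satisfy $u_x = v_y$ and $u_y = -v_x$ at any point.

```python
import cmath

f = cmath.exp            # an analytic test function
x0, y0, h = 0.3, -0.7, 1e-6

def u(x, y): return f(complex(x, y)).real
def v(x, y): return f(complex(x, y)).imag

# central-difference partial derivatives
ux = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)
uy = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)
vx = (v(x0 + h, y0) - v(x0 - h, y0)) / (2 * h)
vy = (v(x0, y0 + h) - v(x0, y0 - h)) / (2 * h)

print(ux - vy, uy + vx)  # both differences close to zero
```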
Conformal Mapping

[Figure: a grid in the z-plane and its image under an analytic map w = f(z); intersection angles between grid curves are preserved.]

For a curve z(t) through $z_0$ and its image w(t) = f(z(t)) through $w_0 = f(z_0)$, $\dot w(0) = f'(z_0)\,\dot z(0)$, implying

$$|\dot w(0)| = |f'(z_0)|\,|\dot z(0)|$$

and $\arg\dot w(0) = \arg f'(z_0) + \arg\dot z(0)$: wherever $f'(z_0) \ne 0$, all curves through $z_0$ are rotated by the same angle and scaled by the same factor, so the mapping preserves angles, i.e. it is conformal.
Potential Theory

Points to note

- Analyticity of complex functions
- Conformal mapping
- Potential theory
Outline

Integrals in the Complex Plane
- Line Integral
- Cauchy's Integral Theorem
- Cauchy's Integral Formula
Line Integral

For a smooth curve C given by z(t), $a \le t \le b$, with $\dot z(t) \ne 0$,

$$\int_C f(z)\,dz = \int_a^b f[z(t)]\,\dot z(t)\,dt.$$

Over a simple closed curve, contour integral: $\oint_C f(z)\,dz$

Example: $\oint_C z^n\,dz$ for integer n, around the circle $z = e^{i\theta}$:

$$\oint_C z^n\,dz = i\int_0^{2\pi} e^{i(n+1)\theta}\,d\theta = \begin{cases} 0 & \text{for } n \ne -1,\\ 2\pi i & \text{for } n = -1.\end{cases}$$
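A direct numerical version of this example (an assumed sketch, not from the slides): discretize $\theta$ on the unit circle and sum $z^n\,dz$; only n = −1 produces a non-zero result.

```python
import cmath, math

def contour_integral_zn(n, m=20000):
    """Integrate z^n around the unit circle z = exp(i theta) numerically."""
    total = 0.0 + 0.0j
    for j in range(m):
        th = 2 * math.pi * (j + 0.5) / m
        z = cmath.exp(1j * th)
        dz = 1j * z * (2 * math.pi / m)   # dz = i e^{i theta} d theta
        total += z**n * dz
    return total

print(contour_integral_zn(2))    # close to 0
print(contour_integral_zn(-1))   # close to 2 pi i
```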
Cauchy's integral theorem: for f(z) analytic in a simply connected domain containing the simple closed curve C and its interior region R, Green's theorem in the plane and the C-R conditions give

$$\oint_C f(z)\,dz = \iint_R\left(-\frac{\partial v}{\partial x} - \frac{\partial u}{\partial y}\right)dx\,dy + i\iint_R\left(\frac{\partial u}{\partial x} - \frac{\partial v}{\partial y}\right)dx\,dy = 0.$$
Cauchy's Integral Theorem

[Figure: alternative paths C1, C2 from z1 to z2 inside a simply connected domain D.]

For an analytic function f(z) in a simply connected domain D, $\int_{z_1}^{z_2} f(z)\,dz$ is independent of the path and depends only on the end-points, as long as the path is completely contained in D.

Consequence: definition of the function

$$F(z) = \int_{z_0}^{z} f(\zeta)\,d\zeta$$

Indefinite integral

Question: Is F(z) analytic? Is F'(z) = f(z)?
Cauchy's Integral Theorem

Indefinite integral

Question: Is F(z) analytic? Is F'(z) = f(z)?

$$\frac{F(z + \delta z) - F(z)}{\delta z} - f(z) = \frac{1}{\delta z}\left[\int_{z_0}^{z+\delta z} f(\zeta)\,d\zeta - \int_{z_0}^{z} f(\zeta)\,d\zeta\right] - f(z) = \frac{1}{\delta z}\int_z^{z+\delta z}\left[f(\zeta) - f(z)\right]d\zeta,$$

so that, by continuity of f,

$$\left|\frac{F(z + \delta z) - F(z)}{\delta z} - f(z)\right| < \frac{1}{|\delta z|}\,\epsilon\,|\delta z| = \epsilon:$$

F(z) is analytic, with F'(z) = f(z).
Cauchy's Integral Theorem

Principle of path deformation:

[Figure: paths C1, C2, C3 between z1 and z2 in domain D, deformable into one another through segments s1, s2, s3.]

$$\int_{C_1} f(z)\,dz = \int_{C_2} f(z)\,dz = \int_{C_3} f(z)\,dz$$

For a multiply connected domain, connecting the outer contour C to inner contours C1, C2, C3 by cuts L2, L3, ...:

$$\oint_C f(z)\,dz - \oint_{C_1} f(z)\,dz - \oint_{C_2} f(z)\,dz - \oint_{C_3} f(z)\,dz = 0,\quad\text{i.e.}\quad \oint_C f(z)\,dz = \sum_{i=1}^{n}\oint_{C_i} f(z)\,dz.$$
Cauchy's Integral Formula

$$\oint_C \frac{f(z)}{z - z_0}\,dz = f(z_0)\oint_C \frac{dz}{z - z_0} + \oint_C \frac{f(z) - f(z_0)}{z - z_0}\,dz,$$

with the magnitude of the last integral below any $\epsilon > 0$ on a small enough circle around $z_0$; hence

$$\oint_C \frac{f(z)}{z - z_0}\,dz = 2\pi i\,f(z_0).$$
Cauchy's Integral Formula

Direct applications

Evaluation of contour integrals. Applying the formula at $z_0 = re^{i\theta}$ inside the circle |z| = R and at its inverse point outside, and subtracting,

$$2\pi i\,f(re^{i\theta}) = i\int_0^{2\pi}\left[\frac{Re^{i\phi}}{Re^{i\phi} - re^{i\theta}} + \frac{re^{-i\theta}}{Re^{-i\phi} - re^{-i\theta}}\right]f(Re^{i\phi})\,d\phi = i\int_0^{2\pi}\frac{(R^2 - r^2)\,f(Re^{i\phi})}{(Re^{i\phi} - re^{i\theta})(Re^{-i\phi} - re^{-i\theta})}\,d\phi$$

$$\Rightarrow\quad f(re^{i\theta}) = \frac{1}{2\pi}\int_0^{2\pi}\frac{(R^2 - r^2)\,f(Re^{i\phi})}{R^2 - 2Rr\cos(\phi - \theta) + r^2}\,d\phi \qquad\text{(Poisson's integral formula)}.$$
Cauchy's Integral Formula

Derivatives of an analytic function:

$$f(z_0) = \frac{1}{2\pi i}\oint_C \frac{f(z)}{z - z_0}\,dz,\quad f'(z_0) = \frac{1}{2\pi i}\oint_C \frac{f(z)}{(z - z_0)^2}\,dz,\quad f''(z_0) = \frac{2!}{2\pi i}\oint_C \frac{f(z)}{(z - z_0)^3}\,dz,\ \ldots,$$

$$f^{(n)}(z_0) = \frac{n!}{2\pi i}\oint_C \frac{f(z)}{(z - z_0)^{n+1}}\,dz.$$
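The derivative formula is directly computable (an assumed example, not from the slides): discretizing the contour integral of $f(z)/(z - z_0)^{n+1}$ on a circle recovers $f^{(n)}(z_0)$ to high accuracy.

```python
import cmath, math

def cauchy_derivative(f, z0, n, radius=1.0, m=4000):
    """Approximate f^(n)(z0) = n!/(2 pi i) * contour integral of
    f(z)/(z - z0)^(n+1) dz over a circle of the given radius."""
    total = 0j
    for j in range(m):
        th = 2 * math.pi * (j + 0.5) / m
        z = z0 + radius * cmath.exp(1j * th)
        dz = 1j * (z - z0) * (2 * math.pi / m)
        total += f(z) / (z - z0) ** (n + 1) * dz
    return math.factorial(n) / (2j * math.pi) * total

# third derivative of exp at 0 is exp(0) = 1
print(cauchy_derivative(cmath.exp, 0.0, 3))
```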
Cauchy's Integral Formula

From the integral formula,

$$\frac{f(z_0 + \delta z) - f(z_0)}{\delta z} = \frac{1}{2\pi i\,\delta z}\oint_C f(z)\left[\frac{1}{z - z_0 - \delta z} - \frac{1}{z - z_0}\right]dz = \frac{1}{2\pi i}\oint_C \frac{f(z)\,dz}{(z - z_0 - \delta z)(z - z_0)}$$

$$= \frac{1}{2\pi i}\oint_C \frac{f(z)\,dz}{(z - z_0)^2} + \frac{1}{2\pi i}\oint_C f(z)\left[\frac{1}{(z - z_0 - \delta z)(z - z_0)} - \frac{1}{(z - z_0)^2}\right]dz$$

$$= \frac{1}{2\pi i}\oint_C \frac{f(z)\,dz}{(z - z_0)^2} + \frac{\delta z}{2\pi i}\oint_C \frac{f(z)\,dz}{(z - z_0 - \delta z)(z - z_0)^2};$$

the last term vanishes as $\delta z \to 0$, establishing the formula for $f'(z_0)$.

Points to note

- Consequences of analyticity
Outline

Singularities of Complex Functions

Taylor's series of f(z) about $z_0$:

$$f(z) = \sum_{n=0}^{\infty} a_n (z - z_0)^n$$

with coefficients

$$a_n = \frac{1}{n!}\,f^{(n)}(z_0) = \frac{1}{2\pi i}\oint_C \frac{f(w)\,dw}{(w - z_0)^{n+1}}.$$
Laurent's series:

$$f(z) = \sum_{n=-\infty}^{\infty} a_n (z - z_0)^n = \sum_{m=0}^{\infty} b_m (z - z_0)^m + \sum_{m=1}^{\infty} \frac{c_m}{(z - z_0)^m};$$

with coefficients

$$a_n = \frac{1}{2\pi i}\oint_C \frac{f(w)\,dw}{(w - z_0)^{n+1}};\quad\text{or,}\quad b_m = \frac{1}{2\pi i}\oint_C \frac{f(w)\,dw}{(w - z_0)^{m+1}},\quad c_m = \frac{1}{2\pi i}\oint_C f(w)(w - z_0)^{m-1}\,dw.$$

For z in the annulus between contours $C_2$ (inner) and $C_1$ (outer),

$$f(z) = \frac{1}{2\pi i}\oint_{C_1}\frac{f(w)\,dw}{w - z} - \frac{1}{2\pi i}\oint_{C_2}\frac{f(w)\,dw}{w - z}.$$
Organization of the series:

$$\frac{1}{w - z} = \frac{1}{(w - z_0)\left[1 - (z - z_0)/(w - z_0)\right]} \ \text{ over } C_1,\qquad \frac{1}{w - z} = -\frac{1}{(z - z_0)\left[1 - (w - z_0)/(z - z_0)\right]} \ \text{ over } C_2;$$

[Figure: point z in the annulus around z0, between the inner contour C2 and the outer contour C1.]

using the finite geometric sum

$$\frac{1}{1 - q} = 1 + q + q^2 + \cdots + q^{n-1} + \frac{q^n}{1 - q},\qquad 1 + q + q^2 + \cdots + q^{n-1} = \frac{1 - q^n}{1 - q}.$$
We use $q = \frac{z - z_0}{w - z_0}$ over $C_1$ and $q = \frac{w - z_0}{z - z_0}$ over $C_2$:

$$\frac{1}{w - z} = \frac{1}{w - z_0} + \frac{z - z_0}{(w - z_0)^2} + \cdots + \frac{(z - z_0)^{n-1}}{(w - z_0)^n} + \left(\frac{z - z_0}{w - z_0}\right)^n \frac{1}{w - z}$$

$$\Rightarrow\quad \frac{1}{2\pi i}\oint_{C_1}\frac{f(w)\,dw}{w - z} = a_0 + a_1(z - z_0) + \cdots + a_{n-1}(z - z_0)^{n-1} + T_n,$$

with coefficients as required and

$$T_n = \frac{1}{2\pi i}\oint_{C_1}\left(\frac{z - z_0}{w - z_0}\right)^n \frac{f(w)}{w - z}\,dw.$$

Similarly, with $q = \frac{w - z_0}{z - z_0}$,

$$-\frac{1}{2\pi i}\oint_{C_2}\frac{f(w)\,dw}{w - z} = a_{-1}(z - z_0)^{-1} + \cdots + a_{-n}(z - z_0)^{-n} + T_n^*.$$
Together,

$$f(z) = \sum_{k=-n}^{n-1} a_k (z - z_0)^k + T_n + T_n^*,$$

where

$$T_n = \frac{1}{2\pi i}\oint_{C_1}\left(\frac{z - z_0}{w - z_0}\right)^n\frac{f(w)}{w - z}\,dw \quad\text{and}\quad T_n^* = \frac{1}{2\pi i}\oint_{C_2}\left(\frac{w - z_0}{z - z_0}\right)^n\frac{f(w)}{z - w}\,dw.$$

Since f(w) is bounded, while $\left|\frac{z - z_0}{w - z_0}\right| < 1$ over $C_1$ and $\left|\frac{w - z_0}{z - z_0}\right| < 1$ over $C_2$, both $T_n \to 0$ and $T_n^* \to 0$ as $n \to \infty$.
Residues

From the Laurent series around an isolated singularity $z_0$,

$$\oint_C f(z)\,dz = 2\pi i\,a_{-1},$$

and $a_{-1}$ is called the residue of f(z) at $z_0$. For a pole of order m,

$$(z - z_0)^m f(z) = \sum_{n=-m}^{\infty} a_n (z - z_0)^{m+n} \ \Rightarrow\ \frac{d^{m-1}}{dz^{m-1}}\left[(z - z_0)^m f(z)\right] = \sum_{n=-1}^{\infty} \frac{(m + n)!}{(n + 1)!}\,a_n (z - z_0)^{n+1},$$

$$\operatorname*{Res}_{z_0} f(z) = a_{-1} = \frac{1}{(m - 1)!}\lim_{z \to z_0}\frac{d^{m-1}}{dz^{m-1}}\left[(z - z_0)^m f(z)\right].$$

Residue theorem: If f(z) is analytic inside and on the simple closed curve C, except at the singularities $z_1, z_2, z_3, \ldots, z_k$ inside C, then

$$\oint_C f(z)\,dz = 2\pi i\sum_{i=1}^{k}\operatorname*{Res}_{z_i} f(z).$$
General strategy

- Identify the required integral as a contour integral of a complex function, or a part thereof.
- If the domain of integration is infinite, then extend the contour infinitely, without enclosing new singularities.

Example:

$$I = \int_0^{2\pi}\phi(\cos\theta, \sin\theta)\,d\theta \ \Rightarrow\ I = \oint_C f(z)\,dz,\quad \cos\theta = \frac{1}{2}\left(z + \frac{1}{z}\right),\ \sin\theta = \frac{1}{2i}\left(z - \frac{1}{z}\right),\ d\theta = \frac{dz}{iz},$$

where C is the unit circle centred at the origin. Denoting poles falling inside the unit circle C as $p_j$,

$$I = 2\pi i\sum_j \operatorname*{Res}_{p_j} f(z).$$
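A worked instance of this strategy (an assumed example, not from the slides): for $I = \int_0^{2\pi} d\theta/(2 + \cos\theta)$, the substitution gives $f(z) = 2/\bigl(i(z^2 + 4z + 1)\bigr)$, whose only pole inside the unit circle is $p = -2 + \sqrt3$; with the other root $q = -2 - \sqrt3$, the theorem yields $I = 4\pi/(p - q) = 2\pi/\sqrt3$, which direct quadrature confirms.

```python
import math

p, q = -2 + math.sqrt(3), -2 - math.sqrt(3)   # roots of z^2 + 4z + 1
# I = 2 pi i * Res_p f = 2 pi i * 2/(i (p - q)) = 4 pi / (p - q)
residue_value = 4 * math.pi / (p - q)

# direct midpoint-rule evaluation of the real integral for comparison
m = 10000
direct = sum(1 / (2 + math.cos(2 * math.pi * (j + 0.5) / m))
             * 2 * math.pi / m for j in range(m))

print(residue_value, direct, 2 * math.pi / math.sqrt(3))
```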
[Figure: semicircular contour of radius R in the upper half-plane, closing the real segment [−R, R] and enclosing poles p.]

$$\oint f(z)\,dz = \int_{-R}^{R} f(x)\,dx + \int_S f(z)\,dz$$

$$\Rightarrow\quad I = \int_{-\infty}^{\infty} f(x)\,dx = 2\pi i\sum_j \operatorname*{Res}_{p_j} f(z) \quad\text{as } R \to \infty,$$

if the contribution over the semicircle S vanishes in the limit.
Consider

$$I = A(s) + iB(s) = \int_{-\infty}^{\infty} f(x)\,e^{isx}\,dx = \int_{-\infty}^{\infty} f(x)\cos sx\,dx + i\int_{-\infty}^{\infty} f(x)\sin sx\,dx.$$

As $|e^{isz}| = |e^{isx}|\,|e^{-sy}| = |e^{-sy}| \le 1$ for $y \ge 0$, we have (with $|f(z)| \le M/R^2$ on the semicircle S)

$$\left|\int_S f(z)\,e^{isz}\,dz\right| < \frac{M}{R^2}\,\pi R = \frac{\pi M}{R},$$

which yields, as $R \to \infty$,

$$I = 2\pi i\sum_j \operatorname*{Res}_{p_j}\left[f(z)\,e^{isz}\right].$$

Points to note

- Residue theorem
Outline

Variational Calculus*
- Introduction
- Euler's Equation
- Direct Methods
Variational Calculus*

Euler's Equation

For $I[y(x)] = \int_{x_1}^{x_2} f(x, y, y')\,dx$, the first variation is

$$\delta I = \int_{x_1}^{x_2}\left[\frac{\partial f}{\partial y}\,\delta y + \frac{\partial f}{\partial y'}\,\delta y'\right]dx = \left[\frac{\partial f}{\partial y'}\,\delta y\right]_{x_1}^{x_2} + \int_{x_1}^{x_2}\left[\frac{\partial f}{\partial y} - \frac{d}{dx}\frac{\partial f}{\partial y'}\right]\delta y\,dx.$$

With $\delta y(x_1) = \delta y(x_2) = 0$, the first term vanishes identically, and

$$\delta I = \int_{x_1}^{x_2}\left[\frac{\partial f}{\partial y} - \frac{d}{dx}\frac{\partial f}{\partial y'}\right]\delta y\,dx.$$
Variational Calculus*

Euler's equation:

$$\frac{\partial f}{\partial y} - \frac{d}{dx}\frac{\partial f}{\partial y'} = 0.$$

Functionals involving higher derivatives, $f(x, y, y', y'', \ldots, y^{(n)})$:

$$\delta I = \int_{x_1}^{x_2}\left[\frac{\partial f}{\partial y}\,\delta y + \frac{\partial f}{\partial y'}\,\delta y' + \frac{\partial f}{\partial y''}\,\delta y'' + \cdots + \frac{\partial f}{\partial y^{(n)}}\,\delta y^{(n)}\right]dx$$

Working rule: starting from the last term, integrate one term at a time by parts, using consistency of variations and BCs.

Euler's equation:

$$\frac{\partial f}{\partial y} - \frac{d}{dx}\frac{\partial f}{\partial y'} + \frac{d^2}{dx^2}\frac{\partial f}{\partial y''} - \cdots + (-1)^n\frac{d^n}{dx^n}\frac{\partial f}{\partial y^{(n)}} = 0,$$

an ODE of order 2n, in general.
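A small check of the extremal property (an assumed example, not from the slides): for arc length, $f = \sqrt{1 + y'^2}$, Euler's equation gives $y'' = 0$, i.e. straight lines. Comparing the functional on the straight line y = x with perturbed curves through the same end-points shows the extremal is indeed a minimum here.

```python
import math

def length(dy, m=4000):
    # midpoint-rule evaluation of integral_0^1 sqrt(1 + y'(x)^2) dx
    h = 1.0 / m
    return sum(math.sqrt(1 + dy((j + 0.5) * h) ** 2) * h for j in range(m))

straight = length(lambda x: 1.0)              # y = x, the Euler extremal
for eps in (0.1, 0.05):
    # perturbation eps*sin(pi x) keeps y(0) = 0, y(1) = 1
    bent = length(lambda x: 1.0 + eps * math.pi * math.cos(math.pi * x))
    print(eps, bent - straight)   # positive: perturbed paths are longer
```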
Variational Calculus*

Euler's Equation

For several dependent variables, $I[\mathbf{r}(t)] = \int_{t_1}^{t_2} f(t, \mathbf{r}, \dot{\mathbf{r}})\,dt$:

$$\delta I = \int_{t_1}^{t_2}\left[\frac{\partial f}{\partial \mathbf{r}}\,\delta\mathbf{r} + \frac{\partial f}{\partial \dot{\mathbf{r}}}\,\delta\dot{\mathbf{r}}\right]dt = \left[\frac{\partial f}{\partial \dot{\mathbf{r}}}\,\delta\mathbf{r}\right]_{t_1}^{t_2} + \int_{t_1}^{t_2}\left[\frac{\partial f}{\partial \mathbf{r}} - \frac{d}{dt}\frac{\partial f}{\partial \dot{\mathbf{r}}}\right]^T\delta\mathbf{r}\,dt.$$

Euler's equation: a system of second order ODEs

$$\frac{d}{dt}\frac{\partial f}{\partial \dot{\mathbf{r}}} - \frac{\partial f}{\partial \mathbf{r}} = 0 \quad\text{or}\quad \frac{d}{dt}\frac{\partial f}{\partial \dot r_i} - \frac{\partial f}{\partial r_i} = 0 \ \text{ for each } i.$$
Variational Calculus*

For a functional of a field u(x, y), Euler's equation:

$$\frac{\partial}{\partial x}\frac{\partial f}{\partial u_x} + \frac{\partial}{\partial y}\frac{\partial f}{\partial u_y} - \frac{\partial f}{\partial u} = 0$$

Moving boundaries

Revision of the basic case: allowing non-zero $\delta y(x_1)$, $\delta y(x_2)$.

At an end-point, $\frac{\partial f}{\partial y'}\,\delta y$ has to vanish for arbitrary $\delta y(x)$; hence $\frac{\partial f}{\partial y'} = 0$ there.
Variational Calculus*

Direct Methods

Finite-difference form: sampling y(x) at steps of h,

$$I[y(x)] \approx \Phi(y_1, y_2, y_3, \ldots, y_{N-1}) = \sum_{i=1}^{N} f(\bar x_i, \bar y_i, y_i')\,h,$$

where $\bar x_i = \frac{x_i + x_{i-1}}{2}$, $\bar y_i = \frac{y_i + y_{i-1}}{2}$ and $y_i' = \frac{y_i - y_{i-1}}{h}$.

Minimize $\Phi(y_1, y_2, y_3, \ldots, y_{N-1})$ with respect to the $y_i$; for example, by solving $\frac{\partial\Phi}{\partial y_i} = 0$ for all i.
Variational Calculus*

Direct Methods

Rayleigh-Ritz method

In terms of a set of basis functions, express the solution as

$$y(x) = \sum_{i=1}^{N} \alpha_i\,w_i(x).$$
Variational Calculus*

The functional reduces to an ordinary function of the coefficients,

$$I[y(x)] \equiv \Phi(\boldsymbol\alpha) = \int_a^b f\left(x,\ \sum_{i=1}^{N}\alpha_i w_i(x),\ \sum_{i=1}^{N}\alpha_i w_i'(x)\right)dx,$$

$$\frac{\partial\Phi}{\partial\alpha_j} = \int_a^b\left[\frac{\partial f}{\partial y}\left(x, \sum_{i=1}^{N}\alpha_i w_i, \sum_{i=1}^{N}\alpha_i w_i'\right)w_j(x) + \frac{\partial f}{\partial y'}\left(x, \sum_{i=1}^{N}\alpha_i w_i, \sum_{i=1}^{N}\alpha_i w_i'\right)w_j'(x)\right]dx;$$

integrating the second term by parts (boundary terms vanishing),

$$\frac{\partial\Phi}{\partial\alpha_j} = \int_a^b R\left[\sum_{i=1}^{N}\alpha_i w_i(x)\right]w_j(x)\,dx,$$

where $R[y] \equiv \frac{\partial f}{\partial y} - \frac{d}{dx}\frac{\partial f}{\partial y'}$ is the Euler-equation residual of the variational problem.
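A one-basis-function Rayleigh-Ritz sketch (an assumed example, not from the slides): minimize $I[y] = \int_0^1 (\frac{1}{2}y'^2 - y)\,dx$ with y(0) = y(1) = 0, whose Euler equation is $y'' = -1$ with exact solution $y = x(1-x)/2$. The single basis function $w_1 = x(1-x)$ already captures it, with the optimal coefficient $\alpha = 1/2$.

```python
def I_of_alpha(alpha, m=2000):
    # midpoint-rule evaluation of the functional at y = alpha * x * (1 - x)
    h = 1.0 / m
    total = 0.0
    for j in range(m):
        x = (j + 0.5) * h
        y = alpha * x * (1 - x)
        dy = alpha * (1 - 2 * x)
        total += (0.5 * dy * dy - y) * h
    return total

# minimize over alpha by a crude one-dimensional scan
alpha = min((a / 1000 for a in range(1001)), key=I_of_alpha)
print(alpha)               # near 0.5
print(alpha * 0.25)        # near exact y(0.5) = 0.125
```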
Galerkin method
Question: What if we cannot find a corresponding variational problem for the differential equation?
Answer: Work with the residual directly and demand
$$ \int_a^b R[z(x)]\, w_i(x)\, dx = 0. $$
With the trial solution $z(x) = \sum_j \alpha_j \phi_j(x)$, this becomes
$$ \int_a^b R\left[\sum_j \alpha_j \phi_j(x)\right] w_i(x)\, dx = 0 $$
for each weighting function $w_i(x)$.
1463,
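The Galerkin prescription above can be sketched on a problem with no natural variational principle. The first-order model problem, polynomial trial functions, and Galerkin choice of weights below are assumptions for this illustration:

```python
import numpy as np

# Galerkin sketch for an assumed model problem: y' + y = 1, y(0) = 0,
# exact solution y = 1 - exp(-x). Being first order (non-self-adjoint),
# it has no obvious variational principle -- work with the residual.

N = 5
x = np.linspace(0.0, 1.0, 2001)
dx = np.diff(x)

def integ(f):                      # trapezoidal quadrature on the grid
    return float(np.sum(0.5 * (f[:-1] + f[1:]) * dx))

# Trial functions phi_j(x) = x^j (j = 1..N) satisfy phi_j(0) = 0;
# the same functions serve as weights w_i (the Galerkin choice).
phi  = [x**j for j in range(1, N + 1)]
dphi = [j * x**(j - 1) for j in range(1, N + 1)]

# Demand ∫ R[z] w_i dx = 0 with R[z] = z' + z - 1, i.e.
#   A_ij = ∫ (phi_j' + phi_j) phi_i dx,  b_i = ∫ 1 * phi_i dx.
A = np.array([[integ((dphi[j] + phi[j]) * phi[i]) for j in range(N)]
              for i in range(N)])
b = np.array([integ(np.ones_like(x) * phi[i]) for i in range(N)])
a = np.linalg.solve(A, b)

z = sum(aj * pj for aj, pj in zip(a, phi))   # Galerkin approximation
err = np.max(np.abs(z - (1.0 - np.exp(-x))))
```

Note that nothing here required a functional: the linear system comes directly from orthogonality of the residual to the weighting functions.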
Points to note
Concept of a functional
Euler's equation
1466,
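The two points above can be recapped in one line, consistent with the residual $R[y]$ used in this section: for the functional
$$ I[y] = \int_a^b f(x, y, y')\, dx, $$
a stationary $y(x)$ must satisfy Euler's equation
$$ \frac{\partial f}{\partial y} - \frac{d}{dx}\frac{\partial f}{\partial y'} = 0. $$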
Outline
Epilogue
Epilogue
1467,
Epilogue
Source for further information:
http://home.iitk.ac.in/~dasgupta/MathBook
Destination for feedback:
dasgupta@iitk.ac.in
Some general courses in immediate continuation:
Scientific Computing
Optimization
1469,
Epilogue
Approximation Theory
Geometric Modelling
Computational Geometry
Computer Graphics
Signal Processing
Image Processing
1470,
Outline
Selected References
Selected References
1471,
Selected References
Selected References I
F. S. Acton.
Numerical Methods that usually Work.
The Mathematical Association of America (1990).
C. M. Bender and S. A. Orszag.
Advanced Mathematical Methods for Scientists and Engineers.
Springer-Verlag (1999).
G. Birkhoff and G.-C. Rota.
Ordinary Differential Equations.
John Wiley and Sons (1989).
G. H. Golub and C. F. Van Loan.
Matrix Computations.
The Johns Hopkins University Press (1983).
1472,
Selected References II
M. T. Heath.
Scientific Computing.
Tata McGraw-Hill Co. Ltd (2000).
E. Kreyszig.
Advanced Engineering Mathematics.
John Wiley and Sons (2002).
E. V. Krishnamurthy and S. K. Sen.
Numerical Algorithms.
Affiliated East-West Press Pvt Ltd (1986).
D. G. Luenberger.
Linear and Nonlinear Programming.
Addison-Wesley (1984).
P. V. O'Neil.
Advanced Engineering Mathematics.
Thomson Books (2004).
1473,