
A Generic FDTD Form for Maxwell

Equations
David Ge (dge893@gmail.com)

October 21, 2020

Abstract

A generic FDTD form for the Maxwell equations is derived by proving time advancement
theorems and a curls cascade theorem. Other FDTD algorithms, e.g. the Yee algorithm and its varieties,
are special cases of this generic form, obtained by specifying data sampling schemes and time and space
estimation orders. The maximum time advancement estimation order is determined by the maximum
number of cascaded space curl estimations; the maximum space curl estimation order is determined by
the number of sampling points.

Contents
Introduction
Notations
Time Advancement
  Time Advancement Theorems
  Time Advancement Estimations
Space Curls
  Curls Cascade Theorem
  Space Derivative and Curl Estimators
Generic Forms of FDTD
Deriving of Special Algorithms
  The Time-Space Synchronized FDTD
  Time-shifted Space-synchronized FDTD
  The Yee algorithm
Conclusion
References
Appendixes
  Appendix A. Proof of Time Space Lemma
  Appendix B. Proof of Time Advancement Theorem H
  Appendix C. Proof of Time Advancement Theorem E
  Appendix D. Proof of Curls Cascade Theorem
Introduction
In trying to understand the essence of FDTD algorithms, I came up with this generic form of FDTD.
Other FDTD algorithms, e.g. the Yee algorithm [1] and its varieties, and the Time-Space-Synchronized
FDTD [2], are special cases of this generic form, obtained by specifying data sampling schemes and time
and space estimation orders.

The generic form covers estimations based purely on Taylor series. It does not cover algorithms
involving techniques other than the Taylor series, e.g. Manry et al. [3].

I believe this generic form not only reveals the essence of various FDTD algorithms, but can also be
used in theoretical research, e.g. error analysis and comparison of algorithms. It can be of practical use
too. For example, it predicts that a high time estimation order cannot be achieved via multiple sets of
historical data; it has to be achieved via the number of space samplings. It can also be applied to
irregular geometry with high estimation orders in both time and space.

This paper does not deal with boundary conditions. An FDTD algorithm can be formulated
independently of the boundary problem, at least for regions far away from the boundary and for a short
time. I remind readers of this fact because this paper consequently does not deal with a well-posed
problem; a reader without a software engineering background might otherwise conclude that the work
is incorrect simply because the problem treated here is not well posed.

Notations
Consider the following Maxwell equations:

$$\nabla \cdot E = \rho/\varepsilon \qquad (1)$$

$$\nabla \cdot H = 0 \qquad (2)$$

$$\frac{\partial E}{\partial t} = \frac{1}{\varepsilon}\nabla \times H - \frac{1}{\varepsilon}J(t) \qquad (3)$$

$$\frac{\partial H}{\partial t} = -\frac{1}{\mu}\nabla \times E \qquad (4)$$

where t is time; ρ, ε and μ are time-invariant; J(t) is a known 3D vector function of time; and E and H are 3D
vectors in Cartesian coordinates (x, y, z), representing an electric field and a magnetic field, respectively:

$$E(x,y,z,t) = \begin{bmatrix} E_x(x,y,z,t) \\ E_y(x,y,z,t) \\ E_z(x,y,z,t) \end{bmatrix}, \qquad H(x,y,z,t) = \begin{bmatrix} H_x(x,y,z,t) \\ H_y(x,y,z,t) \\ H_z(x,y,z,t) \end{bmatrix} \qquad (5)$$

For a 3D vector V, its cascaded curls are denoted as follows:

$$\nabla^{\{0\}} \times V \equiv V, \qquad \nabla^{\{1\}} \times V \equiv \nabla \times V \equiv \begin{bmatrix} \dfrac{\partial V_z}{\partial y} - \dfrac{\partial V_y}{\partial z} \\[4pt] \dfrac{\partial V_x}{\partial z} - \dfrac{\partial V_z}{\partial x} \\[4pt] \dfrac{\partial V_y}{\partial x} - \dfrac{\partial V_x}{\partial y} \end{bmatrix}, \qquad \nabla^{\{k\}} \times V \equiv \underbrace{\nabla \times \nabla \times \cdots \times \nabla}_{k} \times V,\ k > 0 \qquad (6)$$

Based on the above definitions, we have

$$\nabla^{\{k\}} \times \nabla^{\{h\}} \times V = \nabla^{\{k+h\}} \times V, \qquad k, h \ge 0 \qquad (7)$$
In this paper, I assume that the order of temporal derivatives and space curls is exchangeable
on E(t), H(t) and J(t), so that for k ≥ 0

$$\frac{\partial\left(\nabla^{\{k\}} \times E(t)\right)}{\partial t} = \nabla^{\{k\}} \times \frac{\partial E(t)}{\partial t}, \qquad \frac{\partial\left(\nabla^{\{k\}} \times H(t)\right)}{\partial t} = \nabla^{\{k\}} \times \frac{\partial H(t)}{\partial t}, \qquad \frac{\partial\left(\nabla^{\{k\}} \times J(t)\right)}{\partial t} = \nabla^{\{k\}} \times \frac{\partial J(t)}{\partial t}$$

Time Advancement
Time Advancement Theorems
Time Advancement Theorem H. Given field data E(t_h) and H, where H may have one or more sets of
data at different times, classified into 3 cases identified by an integer q_h in the following way:

$$\begin{aligned} &H(t_h), && q_h = 0 \\ &H(t_h - \Delta_t), && q_h = 1,\ \Delta_{t1} = \Delta_t \\ &H(t_h),\ H\!\left(t_h - \textstyle\sum_{j=1}^{q}\Delta_{tj}\right), && q_h > 0,\ q = 1, 2, \ldots, q_h \\ &\Delta_t, \Delta_{t1}, \Delta_{t2}, \ldots, \Delta_{tq_h} > 0 \end{aligned} \qquad (8)$$

Then H(t_h + Δ_t) can be expressed by the three cases of (9):

Case 1: H(t_h), q_h = 0 →

$$H(t_h + \Delta_t) = H(t_h) + \sum_{k=0}^{\infty}\frac{\Delta_t^{2k+1}}{(2k+1)!}\left[\frac{1}{\mu}\frac{(-1)^{k+1}}{(\varepsilon\mu)^{k}}\nabla^{\{2k+1\}} \times E(t_h) + J_H^{[2k+1]}(t_h)\right] + \sum_{k=0}^{\infty}\frac{\Delta_t^{2(k+1)}}{(2(k+1))!}\left[\frac{(-1)^{k+1}}{(\varepsilon\mu)^{k+1}}\nabla^{\{2(k+1)\}} \times H(t_h) + J_H^{[2(k+1)]}(t_h)\right] \qquad (9)$$

Case 2: H(t_h − Δ_t), q_h = 1, Δ_{t1} = Δ_t →

$$H(t_h + \Delta_t) = H(t_h - \Delta_t) + \sum_{k=0}^{\infty}\frac{2\Delta_t^{2k+1}}{(2k+1)!}\left[\frac{1}{\mu}\frac{(-1)^{k+1}}{(\varepsilon\mu)^{k}}\nabla^{\{2k+1\}} \times E(t_h) + J_H^{[2k+1]}(t_h)\right]$$

Case 3: H(t_h), H(t_h − Σ_{j=1}^{q}Δ_{tj}), q = 1, 2, …, q_h, q_h > 0 →

$$\begin{aligned} H(t_h + \Delta_t) = {} & \sum_{q=1}^{q_h} H\!\left(t_h - \sum_{j=1}^{q}\Delta_{tj}\right) + (1 - q_h)H(t_h) \\ & + \sum_{k=0}^{\infty}\frac{\Delta_t^{2k+1} + \sum_{q=1}^{q_h}\left(\sum_{j=1}^{q}\Delta_{tj}\right)^{2k+1}}{(2k+1)!}\left[\frac{1}{\mu}\frac{(-1)^{k+1}}{(\varepsilon\mu)^{k}}\nabla^{\{2k+1\}} \times E(t_h) + J_H^{[2k+1]}(t_h)\right] \\ & + \sum_{k=0}^{\infty}\frac{\Delta_t^{2(k+1)} - \sum_{q=1}^{q_h}\left(\sum_{j=1}^{q}\Delta_{tj}\right)^{2(k+1)}}{(2(k+1))!}\left[\frac{(-1)^{k+1}}{(\varepsilon\mu)^{k+1}}\nabla^{\{2(k+1)\}} \times H(t_h) + J_H^{[2(k+1)]}(t_h)\right] \end{aligned}$$

Where

$$J_H^{[2k+1]}(t_h) = \begin{cases} \vec{0}, & k = 0 \\[4pt] \displaystyle\sum_{i=1}^{k}\frac{(-1)^{i+1}}{(\varepsilon\mu)^{i}}\nabla^{\{2i-1\}} \times \frac{d^{2(k-i)+1}J(t)}{dt^{2(k-i)+1}}\bigg|_{t=t_h}, & k > 0 \end{cases} \qquad (10)$$

$$J_H^{[2(k+1)]}(t_h) = \sum_{i=1}^{k+1}\frac{(-1)^{i+1}}{(\varepsilon\mu)^{i}}\nabla^{\{2i-1\}} \times \frac{d^{2(k+1-i)}J(t)}{dt^{2(k+1-i)}}\bigg|_{t=t_h}, \qquad k \ge 0 \qquad (11)$$

Note that Case 1 and Case 2 are two special cases of Case 3. For Case 1, the
term Σ_{q=1}^{q_h} H(t_h − Σ_{j=1}^{q}Δ_{tj}) disappears from Case 3; for Case 2, the term for ∇^{{2(k+1)}} × H disappears from Case
3. To avoid confusion, I list them separately.
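The structure of the cases can be illustrated on a scalar analogue. The sketch below is a hypothetical one-dimensional stand-in for the Case 2 pattern (no fields or curls, just a smooth function and its time derivatives): stepping from t − Δt to t + Δt requires only the odd-order derivatives at the center time t, because the even-order Taylor terms cancel.

```python
import math

# Scalar analogue of Case 2: the even-order Taylor terms cancel in
# f(t + dt) - f(t - dt), leaving only odd-order derivatives at the center t.
# For f = exp, every derivative at t equals exp(t), which makes the check easy.
def case2_advance(derivs_at_t, f_at_t_minus_dt, dt, k_max):
    # f(t + dt) ~ f(t - dt) + sum_{k=0..k_max} 2*dt^(2k+1)/(2k+1)! * f^(2k+1)(t)
    s = f_at_t_minus_dt
    for k in range(k_max + 1):
        s += 2.0 * dt**(2 * k + 1) / math.factorial(2 * k + 1) * derivs_at_t[2 * k + 1]
    return s

t, dt = 0.3, 0.1
derivs = [math.exp(t)] * 12          # all derivatives of exp at t
approx = case2_advance(derivs, math.exp(t - dt), dt, k_max=3)
exact = math.exp(t + dt)
assert abs(approx - exact) < 1e-12
```

In the theorem, the odd time derivatives are then expressed through space curls of E by the Time Space Lemma, which is what turns this Taylor skeleton into an FDTD update.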

Time Advancement Theorem E. Given field data H(t_E) and E, where E may have one or more sets of
data at different times, classified into 3 cases identified by an integer q_E in the following way:

$$\begin{aligned} &E(t_E), && q_E = 0 \\ &E(t_E - \Delta_t), && q_E = 1,\ \Delta_{t1} = \Delta_t \\ &E(t_E),\ E\!\left(t_E - \textstyle\sum_{j=1}^{q}\Delta_{tj}\right), && q_E > 0,\ q = 1, 2, \ldots, q_E \\ &\Delta_t, \Delta_{t1}, \Delta_{t2}, \ldots, \Delta_{tq_E} > 0 \end{aligned} \qquad (12)$$

Then E(t_E + Δ_t) can be expressed by the three cases of (13):

Case 1: E(t_E), q_E = 0 →

$$E(t_E + \Delta_t) = E(t_E) + \sum_{k=0}^{\infty}\frac{\Delta_t^{2k+1}}{(2k+1)!}\left[\frac{(-1)^{k}}{\varepsilon(\varepsilon\mu)^{k}}\nabla^{\{2k+1\}} \times H(t_E) + J_E^{[2k+1]}(t_E)\right] + \sum_{k=0}^{\infty}\frac{\Delta_t^{2(k+1)}}{(2(k+1))!}\left[\frac{(-1)^{k+1}}{(\varepsilon\mu)^{k+1}}\nabla^{\{2(k+1)\}} \times E(t_E) + J_E^{[2(k+1)]}(t_E)\right] \qquad (13)$$

Case 2: E(t_E − Δ_t), q_E = 1, Δ_{t1} = Δ_t →

$$E(t_E + \Delta_t) = E(t_E - \Delta_t) + \sum_{k=0}^{\infty}\frac{2\Delta_t^{2k+1}}{(2k+1)!}\left[\frac{(-1)^{k}}{\varepsilon(\varepsilon\mu)^{k}}\nabla^{\{2k+1\}} \times H(t_E) + J_E^{[2k+1]}(t_E)\right]$$

Case 3: E(t_E), E(t_E − Σ_{j=1}^{q}Δ_{tj}), q = 1, 2, …, q_E, q_E > 0 →

$$\begin{aligned} E(t_E + \Delta_t) = {} & \sum_{q=1}^{q_E} E\!\left(t_E - \sum_{j=1}^{q}\Delta_{tj}\right) + (1 - q_E)E(t_E) \\ & + \sum_{k=0}^{\infty}\frac{\Delta_t^{2k+1} + \sum_{q=1}^{q_E}\left(\sum_{j=1}^{q}\Delta_{tj}\right)^{2k+1}}{(2k+1)!}\left[\frac{(-1)^{k}}{\varepsilon(\varepsilon\mu)^{k}}\nabla^{\{2k+1\}} \times H(t_E) + J_E^{[2k+1]}(t_E)\right] \\ & + \sum_{k=0}^{\infty}\frac{\Delta_t^{2(k+1)} - \sum_{q=1}^{q_E}\left(\sum_{j=1}^{q}\Delta_{tj}\right)^{2(k+1)}}{(2(k+1))!}\left[\frac{(-1)^{k+1}}{(\varepsilon\mu)^{k+1}}\nabla^{\{2(k+1)\}} \times E(t_E) + J_E^{[2(k+1)]}(t_E)\right] \end{aligned}$$

Where

$$J_E^{[2k+1]}(t_E) = \sum_{i=0}^{k}\frac{(-1)^{i+1}}{\varepsilon(\varepsilon\mu)^{i}}\nabla^{\{2i\}} \times \frac{d^{2(k-i)}J(t)}{dt^{2(k-i)}}\bigg|_{t=t_E}, \qquad k \ge 0 \qquad (14)$$

$$J_E^{[2(k+1)]}(t_E) = \sum_{i=0}^{k}\frac{(-1)^{i+1}}{\varepsilon(\varepsilon\mu)^{i}}\nabla^{\{2i\}} \times \frac{d^{2(k-i)+1}J(t)}{dt^{2(k-i)+1}}\bigg|_{t=t_E}, \qquad k \ge 0 \qquad (15)$$

Note that Case 1 and Case 2 are two special cases of Case 3. To avoid confusion, I list them separately.

To prove these two theorems, we need the following lemma.

Time Space Lemma. For the Maxwell equations given by (1)–(4), with the source terms J_H^{[·]} and J_E^{[·]}
defined in (10), (11), (14) and (15), we have for k = 0, 1, 2, …

$$\frac{\partial^{2k+1} H}{\partial t^{2k+1}} = \frac{1}{\mu}\frac{(-1)^{k+1}}{(\varepsilon\mu)^{k}}\nabla^{\{2k+1\}} \times E + J_H^{[2k+1]} \qquad (16)$$

$$\frac{\partial^{2(k+1)} H}{\partial t^{2(k+1)}} = \frac{(-1)^{k+1}}{(\varepsilon\mu)^{k+1}}\nabla^{\{2(k+1)\}} \times H + J_H^{[2(k+1)]} \qquad (17)$$

$$\frac{\partial^{2k+1} E}{\partial t^{2k+1}} = \frac{(-1)^{k}}{\varepsilon(\varepsilon\mu)^{k}}\nabla^{\{2k+1\}} \times H + J_E^{[2k+1]} \qquad (18)$$

$$\frac{\partial^{2(k+1)} E}{\partial t^{2(k+1)}} = \frac{(-1)^{k+1}}{(\varepsilon\mu)^{k+1}}\nabla^{\{2(k+1)\}} \times E + J_E^{[2(k+1)]} \qquad (19)$$

For a proof of this Lemma, see Appendix A.

With the above lemma, we may prove the time advancement theorems.

For a proof of the Time Advancement Theorem H, see Appendix B.

For a proof of the Time Advancement Theorem E, see Appendix C.
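As a concrete sanity check of the lemma, the sketch below evaluates both sides of (16) for k = 1 in closed form on an assumed source-free plane-wave solution (J = 0, not taken from the paper); the derivatives are worked out by hand in the comments.

```python
import math

# Plane-wave check of (16) at k = 1 with J = 0: for E_z = cos(kx - wt),
# H_y = -sqrt(eps/mu)*cos(kx - wt), w = k/sqrt(eps*mu) (a source-free Maxwell
# solution), the lemma claims  d^3 H_y/dt^3 = (1/mu)*(1/(eps*mu))*(curl^3 E)_y.
eps, mu, k = 2.0, 3.0, 1.5
w = k / math.sqrt(eps * mu)
A = -math.sqrt(eps / mu)

for (x, t) in [(0.1, 0.2), (0.7, -0.4), (1.3, 2.2)]:
    u = k * x - w * t
    lhs = A * (-w**3) * math.sin(u)   # d^3/dt^3 of A*cos(kx - wt)
    # curl E = (0, k sin u, 0); with div E = 0, curl^3 E = -Laplacian(curl E),
    # and Laplacian(k sin u) = -k^3 sin u, so (curl^3 E)_y = k^3 sin u.
    rhs = (1.0 / mu) * (1.0 / (eps * mu)) * k**3 * math.sin(u)
    assert abs(lhs - rhs) < 1e-12
```

The same substitution pattern checks the other lemma formulas; with a nonzero current the J terms of (10), (11), (14) and (15) would enter the comparison.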

Time Advancement Estimations


With the above results, we can get generic forms of time advancement estimations.

From (9), we get time advancement estimations for H by cutting the two summations off at a chosen
order k = k_H ≥ 0: (44) consists of the three cases of (9) with Σ_{k=0}^{∞} replaced by Σ_{k=0}^{k_H} and = replaced by ≈.
For example, Case 1 becomes

$$H(t_h + \Delta_t) \approx H(t_h) + \sum_{k=0}^{k_H}\frac{\Delta_t^{2k+1}}{(2k+1)!}\left[\frac{1}{\mu}\frac{(-1)^{k+1}}{(\varepsilon\mu)^{k}}\nabla^{\{2k+1\}} \times E(t_h) + J_H^{[2k+1]}(t_h)\right] + \sum_{k=0}^{k_H}\frac{\Delta_t^{2(k+1)}}{(2(k+1))!}\left[\frac{(-1)^{k+1}}{(\varepsilon\mu)^{k+1}}\nabla^{\{2(k+1)\}} \times H(t_h) + J_H^{[2(k+1)]}(t_h)\right] \qquad (44)$$

and Cases 2 and 3 are truncated in the same way.

From (13), we likewise get time advancement estimations for E by cutting the summations off at
k = k_E ≥ 0: (45) consists of the three cases of (13) with Σ_{k=0}^{∞} replaced by Σ_{k=0}^{k_E} and = replaced by ≈.

Note that k_H in (44) and k_E in (45) do not have to take the same value.

Some observations on (44) and (45):

 The estimation order for time advancement is 2(k + 1), and it is determined by the number of
cascaded space curls used.
 The estimation errors are governed by the first omitted terms, whose coefficients
are (Δ_t^{2(k+1)+1} + Σ_{q}s_q^{2(k+1)+1}) and (Δ_t^{2(k+2)} − Σ_{q}s_q^{2(k+2)}), with s_q = Δ_{t1} + ⋯ + Δ_{tq}. Therefore more
sets of historical data do not necessarily increase estimation accuracy; most likely they
decrease it.
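The second observation can be made concrete with a small arithmetic sketch; the exponents follow the first omitted odd term, and the specific numbers are illustrative only.

```python
import math

# Leading-error comparison for Case 1 vs Case 3 with one extra historical set at
# t - dt. Cutting the odd sum at k, the first omitted odd term has coefficient
# dt^(2k+3)/(2k+3)! in Case 1 but (dt^(2k+3) + dt^(2k+3))/(2k+3)! in Case 3, so
# the extra history doubles the leading odd-term error instead of reducing it.
dt, k = 0.01, 1
case1 = dt**(2 * k + 3) / math.factorial(2 * k + 3)
case3 = (dt**(2 * k + 3) + dt**(2 * k + 3)) / math.factorial(2 * k + 3)
assert case3 == 2 * case1 and case3 > case1
```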

In the next section, we will give generic forms of curl estimations.

Space Curls
To express space curl estimations clearly, write space derivatives as

$$D_u^{k}(V_w) \equiv \frac{\partial^{k} V_w}{\partial u^{k}}, \qquad w, u \in \{x, y, z\};\ k > 0;\ V \text{ is a 3D vector} \qquad (46)$$

For k = 1, the notation can be simplified as

$$D_u^{1}(V_w) \equiv D_u(V_w) \equiv \frac{\partial^{1} V_w}{\partial u^{1}} \equiv \frac{\partial V_w}{\partial u}$$

For convenience, list some vector identities below:

$$\nabla \times (\nabla(\nabla \cdot V)) = \vec{0} \qquad (47)$$

$$\nabla^{\{2\}} \times V = \nabla \times \nabla \times V = \nabla(\nabla \cdot V) - \Delta V \qquad (48)$$
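Identity (48) can be spot-checked numerically. The sketch below uses nested central differences on a polynomial field, for which the differences are exact up to rounding; the field and the step size are arbitrary illustrative choices, not taken from the paper.

```python
# Check of identity (48), curl(curl V) = grad(div V) - Laplacian(V), at a point,
# using nested central differences. For the polynomial field V = (xy, yz, zx)
# the differences are exact up to rounding.
H = 1e-2

def d(f, axis):
    # central difference along axis 0, 1 or 2 of a function f(x, y, z)
    def g(x, y, z):
        p = [x, y, z]; m = [x, y, z]
        p[axis] += H; m[axis] -= H
        return (f(*p) - f(*m)) / (2 * H)
    return g

V = (lambda x, y, z: x * y, lambda x, y, z: y * z, lambda x, y, z: z * x)

def curl(F):
    return (lambda x, y, z: d(F[2], 1)(x, y, z) - d(F[1], 2)(x, y, z),
            lambda x, y, z: d(F[0], 2)(x, y, z) - d(F[2], 0)(x, y, z),
            lambda x, y, z: d(F[1], 0)(x, y, z) - d(F[0], 1)(x, y, z))

pt = (0.4, -0.7, 1.1)
cc = [f(*pt) for f in curl(curl(V))]
div = lambda x, y, z: d(V[0], 0)(x, y, z) + d(V[1], 1)(x, y, z) + d(V[2], 2)(x, y, z)
grad_div = [d(div, a)(*pt) for a in range(3)]
lap = [sum(d(d(V[i], a), a)(*pt) for a in range(3)) for i in range(3)]
for i in range(3):
    assert abs(cc[i] - (grad_div[i] - lap[i])) < 1e-8
```

For this field, curl(curl V) = (1, 1, 1), grad(div V) = (1, 1, 1) and the Laplacian vanishes, so both sides agree.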
I will first prove the following theorem.

Curls Cascade Theorem

Curls Cascade Theorem. For a 3D vector V, its curls have the following relationships, for k = 1, 2, …:

$$\nabla^{\{2k+1\}} \times V = P^{\{2k+1\}} + (-1)^{k}\begin{bmatrix} D_y^{2k+1}(V_z) - D_z^{2k+1}(V_y) \\ D_z^{2k+1}(V_x) - D_x^{2k+1}(V_z) \\ D_x^{2k+1}(V_y) - D_y^{2k+1}(V_x) \end{bmatrix} \qquad (49)$$

$$P^{\{2k+1\}} = \nabla \times P^{\{2k\}} + (-1)^{k}\begin{bmatrix} \dfrac{\partial}{\partial y}D_x^{2k}(V_z) - \dfrac{\partial}{\partial z}D_x^{2k}(V_y) \\[4pt] \dfrac{\partial}{\partial z}D_y^{2k}(V_x) - \dfrac{\partial}{\partial x}D_y^{2k}(V_z) \\[4pt] \dfrac{\partial}{\partial x}D_z^{2k}(V_y) - \dfrac{\partial}{\partial y}D_z^{2k}(V_x) \end{bmatrix} \qquad (50)$$

$$\nabla^{\{2(k+1)\}} \times V = P^{\{2(k+1)\}} + (-1)^{k+1}\begin{bmatrix} D_y^{2(k+1)}(V_x) + D_z^{2(k+1)}(V_x) \\ D_x^{2(k+1)}(V_y) + D_z^{2(k+1)}(V_y) \\ D_x^{2(k+1)}(V_z) + D_y^{2(k+1)}(V_z) \end{bmatrix} \qquad (51)$$

$$P^{\{2(k+1)\}} = \nabla \times P^{\{2k+1\}} + (-1)^{k}\begin{bmatrix} \dfrac{\partial}{\partial y}D_x^{2k+1}(V_y) + \dfrac{\partial}{\partial z}D_x^{2k+1}(V_z) \\[4pt] \dfrac{\partial}{\partial z}D_y^{2k+1}(V_z) + \dfrac{\partial}{\partial x}D_y^{2k+1}(V_x) \\[4pt] \dfrac{\partial}{\partial x}D_z^{2k+1}(V_x) + \dfrac{\partial}{\partial y}D_z^{2k+1}(V_y) \end{bmatrix} \qquad (52)$$

$$P^{\{2\}} = -\begin{bmatrix} D_x^{2}(V_x) \\ D_y^{2}(V_y) \\ D_z^{2}(V_z) \end{bmatrix} \qquad (53)$$

Note that (53) omits the curl-free term ∇(∇·V) of the exact decomposition of ∇^{{2}} × V; by (47) this omission
does not change ∇ × P^{{2}} in (50), so the recursion is unaffected.

For a proof of the Curls Cascade Theorem, see Appendix D.

Space Derivative and Curl Estimators

Suppose for a function v(s) there are M sampled data available: v(s + Δ_i), i = 1, 2, …, M. Expressing
them in Taylor series, we have

$$v(s + \Delta_i) = v(s) + \sum_{k=1}^{\infty}\frac{\Delta_i^{k}}{k!}\frac{d^{k}v(s)}{ds^{k}}, \qquad i = 1, 2, \ldots, M$$

Cutting the series off at M, we can get derivative estimations by solving a set of linear equations:

$$\begin{bmatrix} v(s+\Delta_1) - v(s) \\ v(s+\Delta_2) - v(s) \\ \vdots \\ v(s+\Delta_M) - v(s) \end{bmatrix} \approx A\begin{bmatrix} \dfrac{dv(s)}{ds} \\[4pt] \dfrac{d^{2}v(s)}{ds^{2}} \\ \vdots \\ \dfrac{d^{M}v(s)}{ds^{M}} \end{bmatrix}, \qquad A = \begin{bmatrix} \Delta_1 & \frac{1}{2!}\Delta_1^{2} & \cdots & \frac{1}{M!}\Delta_1^{M} \\ \Delta_2 & \frac{1}{2!}\Delta_2^{2} & \cdots & \frac{1}{M!}\Delta_2^{M} \\ \vdots & \vdots & & \vdots \\ \Delta_M & \frac{1}{2!}\Delta_M^{2} & \cdots & \frac{1}{M!}\Delta_M^{M} \end{bmatrix}, \qquad A^{-1} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1M} \\ a_{21} & a_{22} & \cdots & a_{2M} \\ \vdots & \vdots & & \vdots \\ a_{M1} & a_{M2} & \cdots & a_{MM} \end{bmatrix}$$

so that

$$\begin{bmatrix} \dfrac{dv(s)}{ds} \\ \vdots \\ \dfrac{d^{M}v(s)}{ds^{M}} \end{bmatrix} \approx A^{-1}\begin{bmatrix} v(s+\Delta_1) - v(s) \\ \vdots \\ v(s+\Delta_M) - v(s) \end{bmatrix}$$

Using notation (46) and introducing a new symbol 𝔇 for derivative estimations, the above derivative
estimations for a 3D vector V can be written as

$$D_x^{h}(V_w) \approx \mathfrak{D}_x^{h}(V_w) \equiv \begin{bmatrix} a_{h1} & a_{h2} & \cdots & a_{hM} \end{bmatrix}\begin{bmatrix} V_w(x+\Delta_1, y, z) - V_w(x, y, z) \\ V_w(x+\Delta_2, y, z) - V_w(x, y, z) \\ \vdots \\ V_w(x+\Delta_M, y, z) - V_w(x, y, z) \end{bmatrix} \qquad (64)$$

and similarly for 𝔇_y^h(V_w) and 𝔇_z^h(V_w) with the offsets applied to y and z; w ∈ {x, y, z}, h = 1, 2, …, M.
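A minimal sketch of the estimator construction behind (64), assuming numpy is available; the offsets and the test function are arbitrary illustrative choices. With M = 3 samples of a cubic, the truncated Taylor system is exact, so the recovered derivatives match the analytic ones.

```python
import math
import numpy as np

# Build A with entries A[i, k] = delta_i^(k+1)/(k+1)! and invert it; row h-1 of
# A^(-1), applied to the differences v(s + delta_i) - v(s), estimates the h-th
# derivative. Offsets need not be symmetric or uniform.
def deriv_estimators(deltas):
    M = len(deltas)
    A = np.array([[d**(k + 1) / math.factorial(k + 1) for k in range(M)]
                  for d in deltas])
    return np.linalg.inv(A)

deltas = [0.1, -0.05, 0.2]            # M = 3 irregular sampling offsets
Ainv = deriv_estimators(deltas)

v = lambda s: 1.0 + 2.0 * s + 3.0 * s**2 + 0.5 * s**3   # cubic: truncation exact
s0 = 0.7
diffs = np.array([v(s0 + d) - v(s0) for d in deltas])
est = Ainv @ diffs                    # [v', v'', v'''] at s0
exact = np.array([2.0 + 6.0 * s0 + 1.5 * s0**2, 6.0 + 3.0 * s0, 3.0])
assert np.allclose(est, exact, atol=1e-9)
```

The irregular offsets are the point of the construction: the same matrix inverse yields estimators near boundaries or on non-uniform samplings.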

Using the above derivative estimators and introducing a new symbol ∇̂ for curl estimation, the curl
estimations can be written as

$$\nabla^{\{1\}} \times V = \begin{bmatrix} D_y(V_z) - D_z(V_y) \\ D_z(V_x) - D_x(V_z) \\ D_x(V_y) - D_y(V_x) \end{bmatrix} \approx \hat{\nabla}^{\{1\}} \times V \equiv \hat{\nabla} \times V \equiv \begin{bmatrix} \mathfrak{D}_y(V_z) - \mathfrak{D}_z(V_y) \\ \mathfrak{D}_z(V_x) - \mathfrak{D}_x(V_z) \\ \mathfrak{D}_x(V_y) - \mathfrak{D}_y(V_x) \end{bmatrix} \qquad (65)$$

$$\nabla^{\{2\}} \times V = \nabla(\nabla \cdot V) - \Delta V \approx \hat{\nabla}^{\{2\}} \times V \equiv \begin{bmatrix} \mathfrak{D}_x(\nabla \cdot V) - \left(\mathfrak{D}_x^{2}(V_x) + \mathfrak{D}_y^{2}(V_x) + \mathfrak{D}_z^{2}(V_x)\right) \\ \mathfrak{D}_y(\nabla \cdot V) - \left(\mathfrak{D}_x^{2}(V_y) + \mathfrak{D}_y^{2}(V_y) + \mathfrak{D}_z^{2}(V_y)\right) \\ \mathfrak{D}_z(\nabla \cdot V) - \left(\mathfrak{D}_x^{2}(V_z) + \mathfrak{D}_y^{2}(V_z) + \mathfrak{D}_z^{2}(V_z)\right) \end{bmatrix} \qquad (66)$$

Using the above estimation symbols, the Curls Cascade Theorem can be written with estimators.

Curls Cascade Estimations:

$$P^{\{2\}} = -\begin{bmatrix} D_x^{2}(V_x) \\ D_y^{2}(V_y) \\ D_z^{2}(V_z) \end{bmatrix} \approx \hat{P}^{\{2\}} \equiv -\begin{bmatrix} \mathfrak{D}_x^{2}(V_x) \\ \mathfrak{D}_y^{2}(V_y) \\ \mathfrak{D}_z^{2}(V_z) \end{bmatrix} \qquad (67)$$

$$P^{\{2k+1\}} \approx \hat{P}^{\{2k+1\}} \equiv \hat{\nabla} \times \hat{P}^{\{2k\}} + (-1)^{k}\begin{bmatrix} \mathfrak{D}_y\!\left(\mathfrak{D}_x^{2k}(V_z)\right) - \mathfrak{D}_z\!\left(\mathfrak{D}_x^{2k}(V_y)\right) \\ \mathfrak{D}_z\!\left(\mathfrak{D}_y^{2k}(V_x)\right) - \mathfrak{D}_x\!\left(\mathfrak{D}_y^{2k}(V_z)\right) \\ \mathfrak{D}_x\!\left(\mathfrak{D}_z^{2k}(V_y)\right) - \mathfrak{D}_y\!\left(\mathfrak{D}_z^{2k}(V_x)\right) \end{bmatrix} \qquad (68)$$

$$\nabla^{\{2k+1\}} \times V \approx \hat{\nabla}^{\{2k+1\}} \times V \equiv \hat{P}^{\{2k+1\}} + (-1)^{k}\begin{bmatrix} \mathfrak{D}_y^{2k+1}(V_z) - \mathfrak{D}_z^{2k+1}(V_y) \\ \mathfrak{D}_z^{2k+1}(V_x) - \mathfrak{D}_x^{2k+1}(V_z) \\ \mathfrak{D}_x^{2k+1}(V_y) - \mathfrak{D}_y^{2k+1}(V_x) \end{bmatrix} \qquad (69)$$

$$P^{\{2(k+1)\}} \approx \hat{P}^{\{2(k+1)\}} \equiv \hat{\nabla} \times \hat{P}^{\{2k+1\}} + (-1)^{k}\begin{bmatrix} \mathfrak{D}_y\!\left(\mathfrak{D}_x^{2k+1}(V_y)\right) + \mathfrak{D}_z\!\left(\mathfrak{D}_x^{2k+1}(V_z)\right) \\ \mathfrak{D}_z\!\left(\mathfrak{D}_y^{2k+1}(V_z)\right) + \mathfrak{D}_x\!\left(\mathfrak{D}_y^{2k+1}(V_x)\right) \\ \mathfrak{D}_x\!\left(\mathfrak{D}_z^{2k+1}(V_x)\right) + \mathfrak{D}_y\!\left(\mathfrak{D}_z^{2k+1}(V_y)\right) \end{bmatrix} \qquad (70)$$

$$\nabla^{\{2(k+1)\}} \times V \approx \hat{\nabla}^{\{2(k+1)\}} \times V \equiv \hat{P}^{\{2(k+1)\}} + (-1)^{k+1}\begin{bmatrix} \mathfrak{D}_y^{2(k+1)}(V_x) + \mathfrak{D}_z^{2(k+1)}(V_x) \\ \mathfrak{D}_x^{2(k+1)}(V_y) + \mathfrak{D}_z^{2(k+1)}(V_y) \\ \mathfrak{D}_x^{2(k+1)}(V_z) + \mathfrak{D}_y^{2(k+1)}(V_z) \end{bmatrix} \qquad (71)$$

$$k = 1, 2, \ldots, \begin{cases} \dfrac{M-1}{2}, & M \ge 3 \text{ is odd} \\[4pt] \dfrac{M}{2} - 1, & M \ge 4 \text{ is even} \end{cases} \qquad (72)$$
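The first-order curl estimation (65) can be assembled directly from (64). The sketch below uses the M = 2 offsets [h, −h] (an illustrative choice), for which the first row of A⁻¹ reduces to a central difference, exact on a quadratic test field.

```python
import math
import numpy as np

# Curl estimator (65) built from (64) with M = 2 offsets [h, -h]: the first row
# of A^(-1) reproduces a central difference, which is exact for the quadratic
# test field V = (xy, yz, zx), whose curl is (-y, -z, -x).
h = 0.05
deltas = [h, -h]
A = np.array([[d**(k + 1) / math.factorial(k + 1) for k in range(2)] for d in deltas])
a1 = np.linalg.inv(A)[0]              # weights of the first-derivative estimator

def D(f, axis, p):
    # estimated first derivative at p, per (64): a1 . [f(p + delta_i e_u) - f(p)]
    diffs = []
    for dlt in deltas:
        q = list(p); q[axis] += dlt
        diffs.append(f(*q) - f(*p))
    return a1 @ np.array(diffs)

V = (lambda x, y, z: x * y, lambda x, y, z: y * z, lambda x, y, z: z * x)
p = (0.3, 0.9, -0.5)
curl_est = np.array([D(V[2], 1, p) - D(V[1], 2, p),
                     D(V[0], 2, p) - D(V[2], 0, p),
                     D(V[1], 0, p) - D(V[0], 1, p)])
assert np.allclose(curl_est, [-p[1], -p[2], -p[0]], atol=1e-10)
```

Higher curls follow the cascade (67)–(71) by nesting the same 𝔇 operators, subject to the order limit (72).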

Formulas (65) to (72) give curl estimations from curl 1 to curl M: ∇̂^{{k}} × V, k = 1, 2, …, M.

Applying the above results to the Maxwell equations (1) to (4), using ∇·E = ρ/ε and ∇·H = 0, we get the
field curl estimations listed below.

$$\hat{\nabla}^{\{1\}} \times E = \begin{bmatrix} \mathfrak{D}_y(E_z) - \mathfrak{D}_z(E_y) \\ \mathfrak{D}_z(E_x) - \mathfrak{D}_x(E_z) \\ \mathfrak{D}_x(E_y) - \mathfrak{D}_y(E_x) \end{bmatrix}, \qquad \hat{\nabla}^{\{1\}} \times H = \begin{bmatrix} \mathfrak{D}_y(H_z) - \mathfrak{D}_z(H_y) \\ \mathfrak{D}_z(H_x) - \mathfrak{D}_x(H_z) \\ \mathfrak{D}_x(H_y) - \mathfrak{D}_y(H_x) \end{bmatrix} \qquad (73)$$

$$\hat{\nabla}^{\{2\}} \times E = \begin{bmatrix} \mathfrak{D}_x(\rho/\varepsilon) \\ \mathfrak{D}_y(\rho/\varepsilon) \\ \mathfrak{D}_z(\rho/\varepsilon) \end{bmatrix} - \begin{bmatrix} \mathfrak{D}_x^{2}(E_x) + \mathfrak{D}_y^{2}(E_x) + \mathfrak{D}_z^{2}(E_x) \\ \mathfrak{D}_x^{2}(E_y) + \mathfrak{D}_y^{2}(E_y) + \mathfrak{D}_z^{2}(E_y) \\ \mathfrak{D}_x^{2}(E_z) + \mathfrak{D}_y^{2}(E_z) + \mathfrak{D}_z^{2}(E_z) \end{bmatrix}, \qquad \hat{\nabla}^{\{2\}} \times H = -\begin{bmatrix} \mathfrak{D}_x^{2}(H_x) + \mathfrak{D}_y^{2}(H_x) + \mathfrak{D}_z^{2}(H_x) \\ \mathfrak{D}_x^{2}(H_y) + \mathfrak{D}_y^{2}(H_y) + \mathfrak{D}_z^{2}(H_y) \\ \mathfrak{D}_x^{2}(H_z) + \mathfrak{D}_y^{2}(H_z) + \mathfrak{D}_z^{2}(H_z) \end{bmatrix} \qquad (74)$$

$$\hat{\nabla}^{\{h\}} \times E \text{ and } \hat{\nabla}^{\{h\}} \times H,\ h = 3, 4, \ldots, M, \text{ are calculated by (65) to (71)} \qquad (75)$$

Using ∇̂^{{·}} × E and ∇̂^{{·}} × H to replace ∇^{{·}} × E and ∇^{{·}} × H in (44) and (45), we finally get generic
forms of FDTD.

Generic Forms of FDTD

The generic FDTD form for H, (76), is the estimation (44) — the three cases of (9) truncated at
k = k_H ≥ 0 — with the exact curls ∇^{{2k+1}} × E(t_h) and ∇^{{2(k+1)}} × H(t_h) replaced by the estimated
curls ∇̂^{{2k+1}} × E(t_h) and ∇̂^{{2(k+1)}} × H(t_h) from (65) to (71). For example, Case 2 becomes

$$H(t_h + \Delta_t) \approx H(t_h - \Delta_t) + \sum_{k=0}^{k_H}\frac{2\Delta_t^{2k+1}}{(2k+1)!}\left[\frac{1}{\mu}\frac{(-1)^{k+1}}{(\varepsilon\mu)^{k}}\hat{\nabla}^{\{2k+1\}} \times E(t_h) + J_H^{[2k+1]}(t_h)\right] \qquad (76)$$

The generic FDTD form for E, (77), is likewise (45) — the three cases of (13) truncated at k = k_E ≥ 0 — with
the exact curls replaced by ∇̂^{{2k+1}} × H(t_E) and ∇̂^{{2(k+1)}} × E(t_E).

Note that k_H, k_E and the time steps in (76) and (77) do not have to take the same values. The
matrix A⁻¹, which defines the derivative estimator (64), can also be different for (76) and (77). In fact,
when handling boundary conditions, we may choose different values for these parameters.

Deriving of Special Algorithms


The Time-Space Synchronized FDTD
Assume E_x, E_y, E_z, H_x, H_y and H_z are all at the same space location (x, y, z), and let

$$t_E = t_h$$

Then (76) and (77) give H(t + qΔ_t) and E(t + qΔ_t), q = 1, 2, 3, …

Such an algorithm allows us to calculate the Poynting vector to study field energy transfer:

$$S = E \times H$$

It also allows us to calculate divergences as estimation errors:

$$error_H = \mathfrak{D}_x(H_x) + \mathfrak{D}_y(H_y) + \mathfrak{D}_z(H_z) \qquad (78)$$

$$error_E = \mathfrak{D}_x(E_x) + \mathfrak{D}_y(E_y) + \mathfrak{D}_z(E_z) - \rho/\varepsilon \qquad (79)$$

These errors permit precise comparisons of different algorithms' accuracy: for two algorithms,
we can say precisely which one is more accurate.
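A small sketch of why the divergences work as error measures: when the same difference operator is used for both the curl and the divergence, the discrete divergence of a discrete curl vanishes identically, so any nonzero divergence in a computed field reflects accumulated estimation error rather than the operator itself. The forward difference on a periodic grid below is an illustrative stand-in for the estimators 𝔇.

```python
import numpy as np

# Discrete identity div(curl V) = 0 when curl and divergence use the same
# forward difference on a periodic grid: the mixed differences commute and
# cancel pairwise, leaving only rounding noise.
rng = np.random.default_rng(0)
V = rng.standard_normal((3, 16, 16, 16))

def fwd(F, axis):                     # forward difference with periodic wrap
    return np.roll(F, -1, axis=axis) - F

curl = np.array([fwd(V[2], 1) - fwd(V[1], 2),
                 fwd(V[0], 2) - fwd(V[2], 0),
                 fwd(V[1], 0) - fwd(V[0], 1)])
div_curl = fwd(curl[0], 0) + fwd(curl[1], 1) + fwd(curl[2], 2)
assert np.max(np.abs(div_curl)) < 1e-12
```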

Time-shifted Space-synchronized FDTD


Assume E_x, E_y, E_z, H_x, H_y and H_z are all at the same space location (x, y, z), and let

$$t_E = t_h + \tfrac{1}{2}\Delta_t$$

Then (76) and (77) give H(t + ½Δ_t + qΔ_t) and E(t + qΔ_t), q = 0, 1, 2, 3, …

This gives higher accuracy than the Time-Space Synchronized scheme, but we can no longer calculate
the Poynting vector. We can still calculate divergences to get precise estimation errors.

The Yee algorithm


In (76) and (77), choose the following parameters:

k_H = 0 and k_E = 0 for both (76) and (77);

Δ_t is the same for both (76) and (77), but Case 2 is used with the half step ½Δ_t, so each update
advances by Δ_t;

$$t_E = t_h + \tfrac{1}{2}\Delta_t$$

We get Yee-style time advancement:

$$H\!\left(t + \tfrac{1}{2}\Delta_t\right) \approx H\!\left(t - \tfrac{1}{2}\Delta_t\right) + \Delta_t\left[\frac{1}{\mu}\hat{\nabla}^{\{1\}} \times E(t) + J_H^{[1]}(t)\right]$$
$$E(t + \Delta_t) \approx E(t) + \Delta_t\left[\frac{1}{\varepsilon}\hat{\nabla}^{\{1\}} \times H\!\left(t + \tfrac{1}{2}\Delta_t\right) + J_E^{[1]}\!\left(t + \tfrac{1}{2}\Delta_t\right)\right] \qquad (78)$$

where, by (10), J_H^{[1]} = 0⃗, and by (14), J_E^{[1]} = −(1/ε)J.

For curl estimation, choose M = 1; the matrix is A = [Δ_s], its inverse is simply [1/Δ_s], and the
derivative estimator is (v(s + Δ_s) − v(s))/Δ_s.
The curl estimations become

$$\hat{\nabla}^{\{1\}} \times E = \frac{1}{\Delta_s}\begin{bmatrix} E_z(x, y+\Delta_s, z) - E_z(x,y,z) - E_y(x, y, z+\Delta_s) + E_y(x,y,z) \\ E_x(x, y, z+\Delta_s) - E_x(x,y,z) - E_z(x+\Delta_s, y, z) + E_z(x,y,z) \\ E_y(x+\Delta_s, y, z) - E_y(x,y,z) - E_x(x, y+\Delta_s, z) + E_x(x,y,z) \end{bmatrix}$$

$$\hat{\nabla}^{\{1\}} \times H = \frac{1}{\Delta_s}\begin{bmatrix} H_z(x,y,z) - H_z(x, y-\Delta_s, z) - H_y(x,y,z) + H_y(x, y, z-\Delta_s) \\ H_x(x,y,z) - H_x(x, y, z-\Delta_s) - H_z(x,y,z) + H_z(x-\Delta_s, y, z) \\ H_y(x,y,z) - H_y(x-\Delta_s, y, z) - H_x(x,y,z) + H_x(x, y-\Delta_s, z) \end{bmatrix}$$

Before substituting the above estimations into (78), notice that as an estimate of the derivative
at v(s), (v(s + ½Δ_s) − v(s − ½Δ_s))/Δ_s is one order more accurate than (v(s + Δ_s) − v(s))/Δ_s. Yee gave
a clever way to achieve this by defining

$$E \to \begin{bmatrix} E_x(x+\tfrac{1}{2}\Delta_s,\ y,\ z) \\ E_y(x,\ y+\tfrac{1}{2}\Delta_s,\ z) \\ E_z(x,\ y,\ z+\tfrac{1}{2}\Delta_s) \end{bmatrix}, \qquad H \to \begin{bmatrix} H_x(x,\ y+\tfrac{1}{2}\Delta_s,\ z+\tfrac{1}{2}\Delta_s) \\ H_y(x+\tfrac{1}{2}\Delta_s,\ y,\ z+\tfrac{1}{2}\Delta_s) \\ H_z(x+\tfrac{1}{2}\Delta_s,\ y+\tfrac{1}{2}\Delta_s,\ z) \end{bmatrix}$$
The curl estimations become

$$\hat{\nabla}^{\{1\}} \times E = \frac{1}{\Delta_s}\begin{bmatrix} E_z(x,\ y+\Delta_s,\ z+\tfrac{1}{2}\Delta_s) - E_z(x,\ y,\ z+\tfrac{1}{2}\Delta_s) - E_y(x,\ y+\tfrac{1}{2}\Delta_s,\ z+\Delta_s) + E_y(x,\ y+\tfrac{1}{2}\Delta_s,\ z) \\ E_x(x+\tfrac{1}{2}\Delta_s,\ y,\ z+\Delta_s) - E_x(x+\tfrac{1}{2}\Delta_s,\ y,\ z) - E_z(x+\Delta_s,\ y,\ z+\tfrac{1}{2}\Delta_s) + E_z(x,\ y,\ z+\tfrac{1}{2}\Delta_s) \\ E_y(x+\Delta_s,\ y+\tfrac{1}{2}\Delta_s,\ z) - E_y(x,\ y+\tfrac{1}{2}\Delta_s,\ z) - E_x(x+\tfrac{1}{2}\Delta_s,\ y+\Delta_s,\ z) + E_x(x+\tfrac{1}{2}\Delta_s,\ y,\ z) \end{bmatrix}$$

$$\hat{\nabla}^{\{1\}} \times H = \frac{1}{\Delta_s}\begin{bmatrix} H_z(x+\tfrac{1}{2}\Delta_s,\ y+\tfrac{1}{2}\Delta_s,\ z) - H_z(x+\tfrac{1}{2}\Delta_s,\ y-\tfrac{1}{2}\Delta_s,\ z) - H_y(x+\tfrac{1}{2}\Delta_s,\ y,\ z+\tfrac{1}{2}\Delta_s) + H_y(x+\tfrac{1}{2}\Delta_s,\ y,\ z-\tfrac{1}{2}\Delta_s) \\ H_x(x,\ y+\tfrac{1}{2}\Delta_s,\ z+\tfrac{1}{2}\Delta_s) - H_x(x,\ y+\tfrac{1}{2}\Delta_s,\ z-\tfrac{1}{2}\Delta_s) - H_z(x+\tfrac{1}{2}\Delta_s,\ y+\tfrac{1}{2}\Delta_s,\ z) + H_z(x-\tfrac{1}{2}\Delta_s,\ y+\tfrac{1}{2}\Delta_s,\ z) \\ H_y(x+\tfrac{1}{2}\Delta_s,\ y,\ z+\tfrac{1}{2}\Delta_s) - H_y(x-\tfrac{1}{2}\Delta_s,\ y,\ z+\tfrac{1}{2}\Delta_s) - H_x(x,\ y+\tfrac{1}{2}\Delta_s,\ z+\tfrac{1}{2}\Delta_s) + H_x(x,\ y-\tfrac{1}{2}\Delta_s,\ z+\tfrac{1}{2}\Delta_s) \end{bmatrix}$$
We can see that the components of ∇̂^{{1}} × E are centered at the sampling points of the H components,

$$(x,\ y+\tfrac{1}{2}\Delta_s,\ z+\tfrac{1}{2}\Delta_s), \quad (x+\tfrac{1}{2}\Delta_s,\ y,\ z+\tfrac{1}{2}\Delta_s), \quad (x+\tfrac{1}{2}\Delta_s,\ y+\tfrac{1}{2}\Delta_s,\ z)$$

and the components of ∇̂^{{1}} × H are centered at the sampling points of the E components,

$$(x+\tfrac{1}{2}\Delta_s,\ y,\ z), \quad (x,\ y+\tfrac{1}{2}\Delta_s,\ z), \quad (x,\ y,\ z+\tfrac{1}{2}\Delta_s)$$
Now we get the standard Yee algorithm.

By deriving the Yee algorithm from the generic forms, we can clearly see its advantages and
limitations.

Advantages:

 It uses the computation amount of a first-order estimation to obtain second-order precision.
 Higher-order curl estimations can be used to increase the accuracy of ∇̂^{{1}} ×, but not
of ∇̂^{{h}} ×, h > 1.

Limitations:

 Because no more than one curl can be used (that is, ∇̂^{{h}} ×, h > 1, cannot be used), the time
advancement order cannot be higher than 2.
 The Poynting vector cannot be calculated.
 Divergences cannot be calculated.
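A one-dimensional sketch of the Yee-style leapfrog (78), reduced to the E_z/H_y pair with ε = μ = 1 on a periodic grid (an illustrative setup, not from the paper). At the step size Δt = Δs the 1D scheme is dispersion-free, so after one full traversal of the period every Fourier mode returns to its starting phase and the initial pulse is recovered.

```python
import numpy as np

# 1D Yee leapfrog: H lives at x + ds/2 and t + dt/2 (forward difference of E),
# E lives at integer points (backward difference of H). With dt = ds the scheme
# advects the characteristics exactly on the periodic grid.
N = 200
ds = 1.0 / N
dt = ds                                   # Courant number 1
x = np.arange(N) * ds
E = np.exp(-((x - 0.5) / 0.05) ** 2)      # initial E_z pulse; H_y starts at 0
H = np.zeros(N)
E0 = E.copy()

for _ in range(N):                        # one full traversal of the period
    H += (dt / ds) * (np.roll(E, -1) - E)   # update H to t + dt/2
    E += (dt / ds) * (H - np.roll(H, 1))    # update E to t + dt

assert np.allclose(E, E0, atol=1e-9)
```

Running with dt < ds instead introduces the numerical dispersion that higher-order variants of (76) and (77) are designed to reduce.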

Conclusion
For estimations based purely on Taylor series, this paper shows that there is a generic form of FDTD
algorithm for the Maxwell equations. By "generic" I mean that the other forms of the algorithm are
special cases of this generic form. It does not cover algorithms involving techniques other than the
Taylor series. However, almost all estimation algorithms involve the Taylor series in one way or
another, so this generic form may be used as a base for developing new algorithms or for improving
existing ones.

References
[1] K. S. Yee, Numerical Solution of Initial Boundary Value Problems Involving Maxwell's Equations in
Isotropic Media, IEEE Trans. Antennas Propagat., Vol. 14, 302-307 (1966)

[2] David Ge, Time-Space-Synchronized FDTD Method,
https://github.com/DavidGeUSA/TSS/blob/master/TSS.pdf

[3] Charles W. Manry Jr., Shira L. Broschat, and John B. Schneider, Higher-Order FDTD Methods for Large
Problems, https://core.ac.uk/display/24329792
Appendixes
Appendix A. Proof of Time Space Lemma
Proof.

Consider the case k = 0.

For k = 0, (16) becomes

$$\frac{\partial H}{\partial t} = -\frac{1}{\mu}\nabla \times E$$

It is (4). So (16) holds.

For k = 0, (17) becomes

$$\frac{\partial^{2} H}{\partial t^{2}} = -\frac{1}{\varepsilon\mu}\nabla^{\{2\}} \times H + \frac{1}{\varepsilon\mu}\nabla \times J \qquad (20)$$

By taking the temporal derivative of (4) and substituting (3) into it, we have

$$\frac{\partial^{2} H}{\partial t^{2}} = -\frac{1}{\mu}\nabla \times\left(\frac{1}{\varepsilon}\nabla \times H - \frac{1}{\varepsilon}J(t)\right) = -\frac{1}{\varepsilon\mu}\nabla^{\{2\}} \times H + \frac{1}{\varepsilon\mu}\nabla \times J(t)$$

It is the same as (20). So (17) holds.

For k = 0, (18) becomes

$$\frac{\partial E}{\partial t} = \frac{1}{\varepsilon}\nabla^{\{1\}} \times H - \frac{1}{\varepsilon}J$$

It is (3). So (18) holds.

For k = 0, (19) becomes

$$\frac{\partial^{2} E}{\partial t^{2}} = -\frac{1}{\varepsilon\mu}\nabla^{\{2\}} \times E - \frac{1}{\varepsilon}\frac{dJ}{dt} \qquad (21)$$

By taking the temporal derivative of (3) and substituting (4) into it, we have

$$\frac{\partial^{2} E}{\partial t^{2}} = \frac{1}{\varepsilon}\nabla \times\left(-\frac{1}{\mu}\nabla \times E\right) - \frac{1}{\varepsilon}\frac{dJ(t)}{dt} = -\frac{1}{\varepsilon\mu}\nabla^{\{2\}} \times E - \frac{1}{\varepsilon}\frac{dJ(t)}{dt}$$

It is the same as (21). So (19) holds.

So (16)–(19) hold for k = 0.

Consider the case k = 1.

For k = 1, (16) becomes

$$\frac{\partial^{3} H}{\partial t^{3}} = \frac{1}{\mu}\frac{1}{\varepsilon\mu}\nabla^{\{3\}} \times E + \frac{1}{\varepsilon\mu}\nabla^{\{1\}} \times \frac{dJ}{dt} \qquad (22)$$

Taking the temporal derivative of (20) and substituting (4) into it, we have

$$\frac{\partial^{3} H}{\partial t^{3}} = -\frac{1}{\varepsilon\mu}\nabla^{\{2\}} \times\left(-\frac{1}{\mu}\nabla \times E\right) + \frac{1}{\varepsilon\mu}\nabla \times \frac{dJ}{dt} = \frac{1}{\mu}\frac{1}{\varepsilon\mu}\nabla^{\{3\}} \times E + \frac{1}{\varepsilon\mu}\nabla \times \frac{dJ}{dt}$$

It is the same as (22). So (16) holds.

For k = 1, (17) becomes

$$\frac{\partial^{4} H}{\partial t^{4}} = \frac{1}{(\varepsilon\mu)^{2}}\nabla^{\{4\}} \times H + \frac{1}{\varepsilon\mu}\nabla^{\{1\}} \times \frac{d^{2}J}{dt^{2}} - \frac{1}{(\varepsilon\mu)^{2}}\nabla^{\{3\}} \times J \qquad (23)$$

Taking the temporal derivative of (22) and substituting (3) into it, we have

$$\frac{\partial^{4} H}{\partial t^{4}} = \frac{1}{\mu}\frac{1}{\varepsilon\mu}\nabla^{\{3\}} \times\left(\frac{1}{\varepsilon}\nabla \times H - \frac{1}{\varepsilon}J\right) + \frac{1}{\varepsilon\mu}\nabla^{\{1\}} \times \frac{d^{2}J}{dt^{2}} = \frac{1}{(\varepsilon\mu)^{2}}\nabla^{\{4\}} \times H - \frac{1}{(\varepsilon\mu)^{2}}\nabla^{\{3\}} \times J + \frac{1}{\varepsilon\mu}\nabla^{\{1\}} \times \frac{d^{2}J}{dt^{2}}$$

It is the same as (23). So (17) holds.

For k = 1, (18) becomes

$$\frac{\partial^{3} E}{\partial t^{3}} = -\frac{1}{\varepsilon}\frac{1}{\varepsilon\mu}\nabla^{\{3\}} \times H - \frac{1}{\varepsilon}\frac{d^{2}J}{dt^{2}} + \frac{1}{\varepsilon}\frac{1}{\varepsilon\mu}\nabla^{\{2\}} \times J \qquad (24)$$

Taking the temporal derivative of (21) and substituting (3) into it, we have

$$\frac{\partial^{3} E}{\partial t^{3}} = -\frac{1}{\varepsilon\mu}\nabla^{\{2\}} \times\left(\frac{1}{\varepsilon}\nabla \times H - \frac{1}{\varepsilon}J\right) - \frac{1}{\varepsilon}\frac{d^{2}J}{dt^{2}} = -\frac{1}{\varepsilon}\frac{1}{\varepsilon\mu}\nabla^{\{3\}} \times H + \frac{1}{\varepsilon}\frac{1}{\varepsilon\mu}\nabla^{\{2\}} \times J - \frac{1}{\varepsilon}\frac{d^{2}J}{dt^{2}}$$

It is the same as (24). So (18) holds.

For k = 1, (19) becomes

$$\frac{\partial^{4} E}{\partial t^{4}} = \frac{1}{(\varepsilon\mu)^{2}}\nabla^{\{4\}} \times E - \frac{1}{\varepsilon}\frac{d^{3}J}{dt^{3}} + \frac{1}{\varepsilon}\frac{1}{\varepsilon\mu}\nabla^{\{2\}} \times \frac{dJ}{dt} \qquad (25)$$

Taking the temporal derivative of (24) and substituting (4) into it, we have

$$\frac{\partial^{4} E}{\partial t^{4}} = -\frac{1}{\varepsilon}\frac{1}{\varepsilon\mu}\nabla^{\{3\}} \times\left(-\frac{1}{\mu}\nabla \times E\right) - \frac{1}{\varepsilon}\frac{d^{3}J}{dt^{3}} + \frac{1}{\varepsilon}\frac{1}{\varepsilon\mu}\nabla^{\{2\}} \times \frac{dJ}{dt} = \frac{1}{(\varepsilon\mu)^{2}}\nabla^{\{4\}} \times E - \frac{1}{\varepsilon}\frac{d^{3}J}{dt^{3}} + \frac{1}{\varepsilon}\frac{1}{\varepsilon\mu}\nabla^{\{2\}} \times \frac{dJ}{dt}$$

It is the same as (25). So (19) holds.

So (16)–(19) hold for k = 1.

Assume that for some k ≥ 1, (16)–(19) hold. Consider the case k + 1.

For k + 1, (16) becomes

$$\frac{\partial^{2(k+1)+1} H}{\partial t^{2(k+1)+1}} = \frac{1}{\mu}\frac{(-1)^{k+2}}{(\varepsilon\mu)^{k+1}}\nabla^{\{2(k+1)+1\}} \times E + J_H^{[2(k+1)+1]} \qquad (26)$$

Taking the temporal derivative of (17) and substituting (4) into it, we have

$$\frac{\partial^{2(k+1)+1} H}{\partial t^{2(k+1)+1}} = \frac{(-1)^{k+1}}{(\varepsilon\mu)^{k+1}}\nabla^{\{2(k+1)\}} \times\left(-\frac{1}{\mu}\nabla \times E\right) + \sum_{i=1}^{k+1}\frac{(-1)^{i+1}}{(\varepsilon\mu)^{i}}\nabla^{\{2i-1\}} \times \frac{d^{2(k+1-i)+1}J}{dt^{2(k+1-i)+1}} = \frac{1}{\mu}\frac{(-1)^{k+2}}{(\varepsilon\mu)^{k+1}}\nabla^{\{2(k+1)+1\}} \times E + J_H^{[2(k+1)+1]}$$

since by (10) the last sum is exactly J_H^{[2(k+1)+1]}. It is the same as (26). So (16) holds for k + 1.

For k + 1, (17) becomes

$$\frac{\partial^{2(k+2)} H}{\partial t^{2(k+2)}} = \frac{(-1)^{k+2}}{(\varepsilon\mu)^{k+2}}\nabla^{\{2(k+2)\}} \times H + J_H^{[2(k+2)]} \qquad (27)$$

Taking the temporal derivative of (26) and substituting (3) into it, we have

$$\frac{\partial^{2(k+2)} H}{\partial t^{2(k+2)}} = \frac{1}{\mu}\frac{(-1)^{k+2}}{(\varepsilon\mu)^{k+1}}\nabla^{\{2(k+1)+1\}} \times\left(\frac{1}{\varepsilon}\nabla \times H - \frac{1}{\varepsilon}J\right) + \sum_{i=1}^{k+1}\frac{(-1)^{i+1}}{(\varepsilon\mu)^{i}}\nabla^{\{2i-1\}} \times \frac{d^{2(k+2-i)}J}{dt^{2(k+2-i)}}$$

$$= \frac{(-1)^{k+2}}{(\varepsilon\mu)^{k+2}}\nabla^{\{2(k+2)\}} \times H + \frac{(-1)^{k+3}}{(\varepsilon\mu)^{k+2}}\nabla^{\{2(k+2)-1\}} \times J + \sum_{i=1}^{k+1}\frac{(-1)^{i+1}}{(\varepsilon\mu)^{i}}\nabla^{\{2i-1\}} \times \frac{d^{2(k+2-i)}J}{dt^{2(k+2-i)}}$$

The last two terms are exactly J_H^{[2(k+2)]} of (11), the ∇^{{2(k+2)−1}} × J term being the summand i = k + 2.
It is the same as (27). So (17) holds for k + 1.

For k + 1, (18) becomes

$$\frac{\partial^{2(k+1)+1} E}{\partial t^{2(k+1)+1}} = \frac{(-1)^{k+1}}{\varepsilon(\varepsilon\mu)^{k+1}}\nabla^{\{2(k+1)+1\}} \times H + J_E^{[2(k+1)+1]} \qquad (28)$$

Taking the temporal derivative of (19) and substituting (3) into it, we have

$$\frac{\partial^{2(k+1)+1} E}{\partial t^{2(k+1)+1}} = \frac{(-1)^{k+1}}{(\varepsilon\mu)^{k+1}}\nabla^{\{2(k+1)\}} \times\left(\frac{1}{\varepsilon}\nabla \times H - \frac{1}{\varepsilon}J\right) + \sum_{i=0}^{k}\frac{(-1)^{i+1}}{\varepsilon(\varepsilon\mu)^{i}}\nabla^{\{2i\}} \times \frac{d^{2(k+1-i)}J}{dt^{2(k+1-i)}}$$

$$= \frac{(-1)^{k+1}}{\varepsilon(\varepsilon\mu)^{k+1}}\nabla^{\{2(k+1)+1\}} \times H + \frac{(-1)^{k+2}}{\varepsilon(\varepsilon\mu)^{k+1}}\nabla^{\{2(k+1)\}} \times J + \sum_{i=0}^{k}\frac{(-1)^{i+1}}{\varepsilon(\varepsilon\mu)^{i}}\nabla^{\{2i\}} \times \frac{d^{2(k+1-i)}J}{dt^{2(k+1-i)}}$$

The last two terms are exactly J_E^{[2(k+1)+1]} of (14), the ∇^{{2(k+1)}} × J term being the summand i = k + 1.
It is the same as (28). So (18) holds for k + 1.

For k + 1, (19) becomes

$$\frac{\partial^{2(k+2)} E}{\partial t^{2(k+2)}} = \frac{(-1)^{k+2}}{(\varepsilon\mu)^{k+2}}\nabla^{\{2(k+2)\}} \times E + J_E^{[2(k+2)]} \qquad (29)$$

Taking the temporal derivative of (28) and substituting (4) into it, we have

$$\frac{\partial^{2(k+2)} E}{\partial t^{2(k+2)}} = \frac{(-1)^{k+1}}{\varepsilon(\varepsilon\mu)^{k+1}}\nabla^{\{2(k+1)+1\}} \times\left(-\frac{1}{\mu}\nabla \times E\right) + \sum_{i=0}^{k+1}\frac{(-1)^{i+1}}{\varepsilon(\varepsilon\mu)^{i}}\nabla^{\{2i\}} \times \frac{d^{2(k+1-i)+1}J}{dt^{2(k+1-i)+1}} = \frac{(-1)^{k+2}}{(\varepsilon\mu)^{k+2}}\nabla^{\{2(k+2)\}} \times E + J_E^{[2(k+2)]}$$

by (15). It is the same as (29). So (19) holds for k + 1.

So (16)–(19) hold for k + 1.

Thus, (16)–(19) hold for all k ≥ 0.

QED.

Appendix B. Proof of Time Advancement Theorem H


Proof.

Applying the Taylor series, we have

$$H(t_h + \Delta_t) = \sum_{k=0}^{\infty}\frac{\Delta_t^{k}}{k!}\frac{\partial^{k} H(t_h)}{\partial t^{k}}$$

Rearranging the terms into odd and even orders, we have

$$H(t_h + \Delta_t) = H(t_h) + \sum_{k=0}^{\infty}\frac{\Delta_t^{2k+1}}{(2k+1)!}\frac{\partial^{2k+1} H(t_h)}{\partial t^{2k+1}} + \sum_{k=0}^{\infty}\frac{\Delta_t^{2(k+1)}}{(2(k+1))!}\frac{\partial^{2(k+1)} H(t_h)}{\partial t^{2(k+1)}} \qquad (30)$$

Combining (10) with (16), and (11) with (17), we have

$$\frac{\partial^{2k+1} H}{\partial t^{2k+1}} = \frac{1}{\mu}\frac{(-1)^{k+1}}{(\varepsilon\mu)^{k}}\nabla^{\{2k+1\}} \times E + J_H^{[2k+1]}(t_h) \qquad (31)$$

$$\frac{\partial^{2(k+1)} H}{\partial t^{2(k+1)}} = \frac{(-1)^{k+1}}{(\varepsilon\mu)^{k+1}}\nabla^{\{2(k+1)\}} \times H + J_H^{[2(k+1)]}(t_h) \qquad (32)$$

Substituting (31) and (32) into (30), we have

$$H(t_h + \Delta_t) = H(t_h) + \sum_{k=0}^{\infty}\frac{\Delta_t^{2k+1}}{(2k+1)!}\left[\frac{1}{\mu}\frac{(-1)^{k+1}}{(\varepsilon\mu)^{k}}\nabla^{\{2k+1\}} \times E(t_h) + J_H^{[2k+1]}(t_h)\right] + \sum_{k=0}^{\infty}\frac{\Delta_t^{2(k+1)}}{(2(k+1))!}\left[\frac{(-1)^{k+1}}{(\varepsilon\mu)^{k+1}}\nabla^{\{2(k+1)\}} \times H(t_h) + J_H^{[2(k+1)]}(t_h)\right] \qquad (33)$$

(33) is Case 1 of (9).

For a series of historical data H(t_h), H(t_h − Σ_{j=1}^{q}Δ_{tj}), q = 1, 2, …, q_h, replacing Δ_t by
−Σ_{j=1}^{q}Δ_{tj} in the expansion (33), we have

$$H\!\left(t_h - \sum_{j=1}^{q}\Delta_{tj}\right) = H(t_h) - \sum_{k=0}^{\infty}\frac{\left(\sum_{j=1}^{q}\Delta_{tj}\right)^{2k+1}}{(2k+1)!}\left[\cdots\right]_{odd} + \sum_{k=0}^{\infty}\frac{\left(\sum_{j=1}^{q}\Delta_{tj}\right)^{2(k+1)}}{(2(k+1))!}\left[\cdots\right]_{even}, \quad q = 1, \ldots, q_h \qquad (34)$$

with the same odd and even brackets as in (33). Summing (34) over q = 1, …, q_h and subtracting the
result from (33), we have

$$\begin{aligned} H(t_h + \Delta_t) = {} & \sum_{q=1}^{q_h} H\!\left(t_h - \sum_{j=1}^{q}\Delta_{tj}\right) + (1 - q_h)H(t_h) \\ & + \sum_{k=0}^{\infty}\frac{\Delta_t^{2k+1} + \sum_{q=1}^{q_h}\left(\sum_{j=1}^{q}\Delta_{tj}\right)^{2k+1}}{(2k+1)!}\left[\frac{1}{\mu}\frac{(-1)^{k+1}}{(\varepsilon\mu)^{k}}\nabla^{\{2k+1\}} \times E(t_h) + J_H^{[2k+1]}(t_h)\right] \\ & + \sum_{k=0}^{\infty}\frac{\Delta_t^{2(k+1)} - \sum_{q=1}^{q_h}\left(\sum_{j=1}^{q}\Delta_{tj}\right)^{2(k+1)}}{(2(k+1))!}\left[\frac{(-1)^{k+1}}{(\varepsilon\mu)^{k+1}}\nabla^{\{2(k+1)\}} \times H(t_h) + J_H^{[2(k+1)]}(t_h)\right] \end{aligned} \qquad (35)$$

(35) is Case 3 of (9).

Letting q_h = 1 and Δ_{t1} = Δ_t, (35) becomes

$$H(t_h + \Delta_t) = H(t_h - \Delta_t) + \sum_{k=0}^{\infty}\frac{2\Delta_t^{2k+1}}{(2k+1)!}\left[\frac{1}{\mu}\frac{(-1)^{k+1}}{(\varepsilon\mu)^{k}}\nabla^{\{2k+1\}} \times E(t_h) + J_H^{[2k+1]}(t_h)\right] \qquad (36)$$

(36) is Case 2 of (9).

Thus, (9) is proved.

QED.

Appendix C. Proof of Time Advancement Theorem E


Proof.

Applying the Taylor series and rearranging the terms into odd and even orders, we have

$$E(t_E + \Delta_t) = \sum_{k=0}^{\infty}\frac{\Delta_t^{k}}{k!}\frac{\partial^{k} E(t_E)}{\partial t^{k}} = E(t_E) + \sum_{k=0}^{\infty}\frac{\Delta_t^{2k+1}}{(2k+1)!}\frac{\partial^{2k+1} E(t_E)}{\partial t^{2k+1}} + \sum_{k=0}^{\infty}\frac{\Delta_t^{2(k+1)}}{(2(k+1))!}\frac{\partial^{2(k+1)} E(t_E)}{\partial t^{2(k+1)}} \qquad (37)$$

Combining (14) with (18), and (15) with (19), we have

$$\frac{\partial^{2k+1} E}{\partial t^{2k+1}} = \frac{(-1)^{k}}{\varepsilon(\varepsilon\mu)^{k}}\nabla^{\{2k+1\}} \times H + J_E^{[2k+1]}(t_E) \qquad (38)$$

$$\frac{\partial^{2(k+1)} E}{\partial t^{2(k+1)}} = \frac{(-1)^{k+1}}{(\varepsilon\mu)^{k+1}}\nabla^{\{2(k+1)\}} \times E + J_E^{[2(k+1)]}(t_E) \qquad (39)$$

Substituting (38) and (39) into (37), we have

$$E(t_E + \Delta_t) = E(t_E) + \sum_{k=0}^{\infty}\frac{\Delta_t^{2k+1}}{(2k+1)!}\left[\frac{(-1)^{k}}{\varepsilon(\varepsilon\mu)^{k}}\nabla^{\{2k+1\}} \times H(t_E) + J_E^{[2k+1]}(t_E)\right] + \sum_{k=0}^{\infty}\frac{\Delta_t^{2(k+1)}}{(2(k+1))!}\left[\frac{(-1)^{k+1}}{(\varepsilon\mu)^{k+1}}\nabla^{\{2(k+1)\}} \times E(t_E) + J_E^{[2(k+1)]}(t_E)\right] \qquad (40)$$

(40) is Case 1 of (13).

For a series of historical data E(t_E), E(t_E − Σ_{j=1}^{q}Δ_{tj}), q = 1, 2, …, q_E, replacing Δ_t by
−Σ_{j=1}^{q}Δ_{tj} in the expansion (40), we have

$$E\!\left(t_E - \sum_{j=1}^{q}\Delta_{tj}\right) = E(t_E) - \sum_{k=0}^{\infty}\frac{\left(\sum_{j=1}^{q}\Delta_{tj}\right)^{2k+1}}{(2k+1)!}\left[\cdots\right]_{odd} + \sum_{k=0}^{\infty}\frac{\left(\sum_{j=1}^{q}\Delta_{tj}\right)^{2(k+1)}}{(2(k+1))!}\left[\cdots\right]_{even}, \quad q = 1, \ldots, q_E \qquad (41)$$

with the same odd and even brackets as in (40). Summing (41) over q = 1, …, q_E and subtracting the
result from (40), we have

$$\begin{aligned} E(t_E + \Delta_t) = {} & \sum_{q=1}^{q_E} E\!\left(t_E - \sum_{j=1}^{q}\Delta_{tj}\right) + (1 - q_E)E(t_E) \\ & + \sum_{k=0}^{\infty}\frac{\Delta_t^{2k+1} + \sum_{q=1}^{q_E}\left(\sum_{j=1}^{q}\Delta_{tj}\right)^{2k+1}}{(2k+1)!}\left[\frac{(-1)^{k}}{\varepsilon(\varepsilon\mu)^{k}}\nabla^{\{2k+1\}} \times H(t_E) + J_E^{[2k+1]}(t_E)\right] \\ & + \sum_{k=0}^{\infty}\frac{\Delta_t^{2(k+1)} - \sum_{q=1}^{q_E}\left(\sum_{j=1}^{q}\Delta_{tj}\right)^{2(k+1)}}{(2(k+1))!}\left[\frac{(-1)^{k+1}}{(\varepsilon\mu)^{k+1}}\nabla^{\{2(k+1)\}} \times E(t_E) + J_E^{[2(k+1)]}(t_E)\right] \end{aligned} \qquad (42)$$

(42) is Case 3 of (13).

Letting q_E = 1 and Δ_{t1} = Δ_t, (42) becomes

$$E(t_E + \Delta_t) = E(t_E - \Delta_t) + \sum_{k=0}^{\infty}\frac{2\Delta_t^{2k+1}}{(2k+1)!}\left[\frac{(-1)^{k}}{\varepsilon(\varepsilon\mu)^{k}}\nabla^{\{2k+1\}} \times H(t_E) + J_E^{[2k+1]}(t_E)\right] \qquad (43)$$

(43) is Case 2 of (13).

Thus, (13) is proved.

QED.

Appendix D. Proof of Curls Cascade Theorem


Proof. For k = 1, by (53), (50) and (47), we have

$$P^{\{3\}} = \nabla \times P^{\{2\}} - \begin{bmatrix} D_y(D_x^{2}(V_z)) - D_z(D_x^{2}(V_y)) \\ D_z(D_y^{2}(V_x)) - D_x(D_y^{2}(V_z)) \\ D_x(D_z^{2}(V_y)) - D_y(D_z^{2}(V_x)) \end{bmatrix} = -\nabla \times \begin{bmatrix} D_x^{2}(V_x) \\ D_y^{2}(V_y) \\ D_z^{2}(V_z) \end{bmatrix} - \begin{bmatrix} D_y(D_x^{2}(V_z)) - D_z(D_x^{2}(V_y)) \\ D_z(D_y^{2}(V_x)) - D_x(D_y^{2}(V_z)) \\ D_x(D_z^{2}(V_y)) - D_y(D_z^{2}(V_x)) \end{bmatrix}$$

$$= -\begin{bmatrix} \dfrac{\partial}{\partial y}\!\left(D_x^{2}(V_z) + D_z^{2}(V_z)\right) - \dfrac{\partial}{\partial z}\!\left(D_x^{2}(V_y) + D_y^{2}(V_y)\right) \\[4pt] \dfrac{\partial}{\partial z}\!\left(D_x^{2}(V_x) + D_y^{2}(V_x)\right) - \dfrac{\partial}{\partial x}\!\left(D_y^{2}(V_z) + D_z^{2}(V_z)\right) \\[4pt] \dfrac{\partial}{\partial x}\!\left(D_y^{2}(V_y) + D_z^{2}(V_y)\right) - \dfrac{\partial}{\partial y}\!\left(D_x^{2}(V_x) + D_z^{2}(V_x)\right) \end{bmatrix} \qquad (54)$$

Inserting the above result into (49) for k = 1, we have

$$\nabla^{\{3\}} \times V = P^{\{3\}} - \begin{bmatrix} D_y^{3}(V_z) - D_z^{3}(V_y) \\ D_z^{3}(V_x) - D_x^{3}(V_z) \\ D_x^{3}(V_y) - D_y^{3}(V_x) \end{bmatrix} = -\begin{bmatrix} \dfrac{\partial}{\partial y}\!\left(D_x^{2}(V_z) + D_y^{2}(V_z) + D_z^{2}(V_z)\right) - \dfrac{\partial}{\partial z}\!\left(D_x^{2}(V_y) + D_y^{2}(V_y) + D_z^{2}(V_y)\right) \\[4pt] \dfrac{\partial}{\partial z}\!\left(\Delta V_x\right) - \dfrac{\partial}{\partial x}\!\left(\Delta V_z\right) \\[4pt] \dfrac{\partial}{\partial x}\!\left(\Delta V_y\right) - \dfrac{\partial}{\partial y}\!\left(\Delta V_x\right) \end{bmatrix} = -\nabla \times \Delta V \qquad (55)$$

Calculating ∇^{{3}} × V by (47) and (48) gives us

$$\nabla^{\{3\}} \times V = \nabla \times \left(\nabla^{\{2\}} \times V\right) = \nabla \times (\nabla(\nabla \cdot V) - \Delta V) = -\nabla \times \Delta V$$

It is the same as (55). So (49) holds for k = 1.

For k = 1, (52) becomes

$$P^{\{4\}} = \nabla \times P^{\{3\}} - \begin{bmatrix} \dfrac{\partial}{\partial y}D_x^{3}(V_y) + \dfrac{\partial}{\partial z}D_x^{3}(V_z) \\[4pt] \dfrac{\partial}{\partial z}D_y^{3}(V_z) + \dfrac{\partial}{\partial x}D_y^{3}(V_x) \\[4pt] \dfrac{\partial}{\partial x}D_z^{3}(V_x) + \dfrac{\partial}{\partial y}D_z^{3}(V_y) \end{bmatrix} \qquad (56)$$

Inserting (56) into (51) for k = 1, we have

$$\nabla^{\{4\}} \times V = P^{\{4\}} + \begin{bmatrix} D_y^{4}(V_x) + D_z^{4}(V_x) \\ D_x^{4}(V_y) + D_z^{4}(V_y) \\ D_x^{4}(V_z) + D_y^{4}(V_z) \end{bmatrix} = \nabla \times P^{\{3\}} - \nabla \times \begin{bmatrix} D_y^{3}(V_z) - D_z^{3}(V_y) \\ D_z^{3}(V_x) - D_x^{3}(V_z) \\ D_x^{3}(V_y) - D_y^{3}(V_x) \end{bmatrix} \qquad (57)$$

By (49) for k = 1, we have

$$\nabla^{\{4\}} \times V = \nabla \times \left(\nabla^{\{3\}} \times V\right) = \nabla \times \left(P^{\{3\}} - \begin{bmatrix} D_y^{3}(V_z) - D_z^{3}(V_y) \\ D_z^{3}(V_x) - D_x^{3}(V_z) \\ D_x^{3}(V_y) - D_y^{3}(V_x) \end{bmatrix}\right)$$

It is the same as (57). So (51) holds for k = 1.

So (49) and (51) hold for k = 1.

Suppose that for some k ≥ 1, (49) and (51) hold. Consider the case k + 1.

For k + 1, (50) becomes

$$P^{\{2(k+1)+1\}} = \nabla \times P^{\{2(k+1)\}} + (-1)^{k+1}\begin{bmatrix} \dfrac{\partial}{\partial y}D_x^{2(k+1)}(V_z) - \dfrac{\partial}{\partial z}D_x^{2(k+1)}(V_y) \\[4pt] \dfrac{\partial}{\partial z}D_y^{2(k+1)}(V_x) - \dfrac{\partial}{\partial x}D_y^{2(k+1)}(V_z) \\[4pt] \dfrac{\partial}{\partial x}D_z^{2(k+1)}(V_y) - \dfrac{\partial}{\partial y}D_z^{2(k+1)}(V_x) \end{bmatrix} \qquad (58)$$

Because (51) holds for k, we have

$$\nabla^{\{2(k+1)+1\}} \times V = \nabla \times \left(\nabla^{\{2(k+1)\}} \times V\right) = \nabla \times P^{\{2(k+1)\}} + (-1)^{k+1}\nabla \times \begin{bmatrix} D_y^{2(k+1)}(V_x) + D_z^{2(k+1)}(V_x) \\ D_x^{2(k+1)}(V_y) + D_z^{2(k+1)}(V_y) \\ D_x^{2(k+1)}(V_z) + D_y^{2(k+1)}(V_z) \end{bmatrix}$$

Expanding the curl of the last column splits it into the mixed-derivative column of (58) and a pure-
derivative column:

$$\nabla^{\{2(k+1)+1\}} \times V = \nabla \times P^{\{2(k+1)\}} + (-1)^{k+1}\begin{bmatrix} \dfrac{\partial}{\partial y}D_x^{2(k+1)}(V_z) - \dfrac{\partial}{\partial z}D_x^{2(k+1)}(V_y) \\ \vdots \end{bmatrix} + (-1)^{k+1}\begin{bmatrix} D_y^{2(k+1)+1}(V_z) - D_z^{2(k+1)+1}(V_y) \\ D_z^{2(k+1)+1}(V_x) - D_x^{2(k+1)+1}(V_z) \\ D_x^{2(k+1)+1}(V_y) - D_y^{2(k+1)+1}(V_x) \end{bmatrix} \qquad (59)$$

Inserting (58) into (59), we get

$$\nabla^{\{2(k+1)+1\}} \times V = P^{\{2(k+1)+1\}} + (-1)^{k+1}\begin{bmatrix} D_y^{2(k+1)+1}(V_z) - D_z^{2(k+1)+1}(V_y) \\ D_z^{2(k+1)+1}(V_x) - D_x^{2(k+1)+1}(V_z) \\ D_x^{2(k+1)+1}(V_y) - D_y^{2(k+1)+1}(V_x) \end{bmatrix} \qquad (60)$$

which is (49) for k + 1. So (49) holds for k + 1.

For k + 1, (52) becomes

$$P^{\{2(k+2)\}} = \nabla \times P^{\{2(k+1)+1\}} + (-1)^{k+1}\begin{bmatrix} \dfrac{\partial}{\partial y}D_x^{2(k+1)+1}(V_y) + \dfrac{\partial}{\partial z}D_x^{2(k+1)+1}(V_z) \\[4pt] \dfrac{\partial}{\partial z}D_y^{2(k+1)+1}(V_z) + \dfrac{\partial}{\partial x}D_y^{2(k+1)+1}(V_x) \\[4pt] \dfrac{\partial}{\partial x}D_z^{2(k+1)+1}(V_x) + \dfrac{\partial}{\partial y}D_z^{2(k+1)+1}(V_y) \end{bmatrix} \qquad (61)$$

Because (49) holds for k + 1, we have

$$\nabla^{\{2(k+2)\}} \times V = \nabla \times \left(\nabla^{\{2(k+1)+1\}} \times V\right) = \nabla \times P^{\{2(k+1)+1\}} + (-1)^{k+1}\nabla \times \begin{bmatrix} D_y^{2(k+1)+1}(V_z) - D_z^{2(k+1)+1}(V_y) \\ D_z^{2(k+1)+1}(V_x) - D_x^{2(k+1)+1}(V_z) \\ D_x^{2(k+1)+1}(V_y) - D_y^{2(k+1)+1}(V_x) \end{bmatrix}$$

Expanding the curl of the last column splits it into the mixed-derivative column of (61) and a pure-
derivative column with a sign change:

$$\nabla^{\{2(k+2)\}} \times V = \nabla \times P^{\{2(k+1)+1\}} + (-1)^{k+1}\begin{bmatrix} \dfrac{\partial}{\partial y}D_x^{2(k+1)+1}(V_y) + \dfrac{\partial}{\partial z}D_x^{2(k+1)+1}(V_z) \\ \vdots \end{bmatrix} + (-1)^{k+2}\begin{bmatrix} D_y^{2(k+2)}(V_x) + D_z^{2(k+2)}(V_x) \\ D_x^{2(k+2)}(V_y) + D_z^{2(k+2)}(V_y) \\ D_x^{2(k+2)}(V_z) + D_y^{2(k+2)}(V_z) \end{bmatrix} \qquad (62)$$

Inserting (61) into (62), we get

$$\nabla^{\{2(k+2)\}} \times V = P^{\{2(k+2)\}} + (-1)^{k+2}\begin{bmatrix} D_y^{2(k+2)}(V_x) + D_z^{2(k+2)}(V_x) \\ D_x^{2(k+2)}(V_y) + D_z^{2(k+2)}(V_y) \\ D_x^{2(k+2)}(V_z) + D_y^{2(k+2)}(V_z) \end{bmatrix} \qquad (63)$$

which is (51) for k + 1. So (51) holds for k + 1.

Thus, (49) and (51) hold for all k ≥ 1.

QED.
