
416   IEEE TRANSACTIONS ON MICROWAVE THEORY AND TECHNIQUES, VOL. MTT-17, NO. 8, AUGUST 1969

Computation of Electromagnetic Fields

ALVIN WEXLER, MEMBER, IEEE

Invited Paper

Manuscript received February 27, 1969; revised May 7, 1969. This work was carried out with financial assistance from Atomic Energy of Canada Ltd. and the National Research Council of Canada. The author is with the Numerical Applications Group, Electrical Engineering Department, University of Manitoba, Winnipeg, Man., Canada.

Abstract: This paper reviews some of the more useful, current and newly developing methods for the solution of electromagnetic fields. It begins with an introduction to numerical methods in general, including specific references to the mathematical tools required for field analysis, e.g., solution of systems of simultaneous linear equations by direct and iterative means, the matrix eigenvalue problem, finite difference differentiation and integration, error estimates, and common types of boundary conditions. This is followed by a description of finite difference solution of boundary and initial value problems. The paper reviews the mathematical principles behind variational methods, from the Hilbert space point of view, for both eigenvalue and deterministic problems. The significance of natural boundary conditions is pointed out. The Rayleigh-Ritz approach for determining the minimizing sequence is explained, followed by a brief description of the finite element method. The paper concludes with an introduction to the techniques and importance of hybrid computation.

I. INTRODUCTION

WHENEVER one devises a mathematical expression to solve for a field quantity, one must be concerned with numerical analysis. Other than those engineers involved exclusively in measurements, in the proof of general existence theorems, or in administration, the remainder of us are numerical analysts to a degree. Whenever we prescribe a sequence of mathematical operations, we are designing an algorithm. A theory is to us a means of extrapolating our experiences in order to make predictions. A mathematical theory, given a statement of a physical problem, produces numbers. A good theory produces accurate numbers easily.

To obtain these numbers, we must employ processes that produce the required accuracy in a finite number of steps, performed in a finite time, upon machines having finite word length and store. Accepting these constraints, we ask how best to perform under them. As far as engineers are concerned, there is no principal difference between analytical and numerical approaches. Both are concerned with numbers. Oftentimes the so-called analytical approach (as though to claim inefficiency as a virtue) is none other than the algorithmically impotent one. The contrast between efficient and inefficient computation is emphasized whenever anything beyond the most trivial work is performed. As an example, consider the solution of a set of simultaneous, linear equations by Cramer's rule, in which each unknown is found as a ratio of two determinants.

A moment's reflection will show that calculating these determinants by evaluating all their cofactors is an insurmountable task when the order is not small. Expanding along each row, n cofactors, each of order n-1, are involved. Each of these cofactors has n-1 cofactors of order n-2, and so on. Continuing in this way, we find that at least n! multiplications are needed to expand the determinant. Neglecting all other operations, and assuming that the computer does one million multiplications per second, we find that it would take more years than the universe is old to expand a determinant of order 24. Even then, due to roundoff errors, the reliability of the result will be questionable. Cramer's rule, then, is surely not a useful algorithm except for the smallest systems. It is certainly of the greatest importance in general proofs and existence theorems but is virtually useless for getting numerical answers. Wilkinson, in referring to the algebraic eigenvalue problem [25, Preface], said "... the problem has a deceptively simple formulation and the background theory has been known for many years; yet the determination of accurate solutions presents a wide variety of challenging problems." He would likely agree that the statement applies equally to most of numerical analysis.

This paper is concerned with the solution of fields by efficient, current and newly developing, computer practices. It is intended as an introduction for those totally unfamiliar with numerical analysis, as well as for those with some experience in the field. For the sake of consistency, many topics had to be deleted, including conformal mapping, point matching, mode matching, and many others. Proponents of these techniques should not feel slighted, as the author has himself been involved in one of them. In addition, it was felt advisable to concentrate on general numerical methods relevant to the microwave engineer, rather than to review particular problems. Companion papers, in this special issue, furnish many specific examples.

The principal methods surveyed are those of finite differences, variational, and hybrid computer techniques. Appropriate mathematical concepts and notations are introduced as an aid to further study of the literature. Some of the references, found to be the most useful for the author, are classified and listed in the bibliography.

II. SYSTEMS OF LINEAR EQUATIONS

The manipulation and solution of large sets of linear, simultaneous equations is basic to most of the techniques employed in computer solution of field problems. These linear equations are the consequence of certain approximations made to expedite the solution.


For example, approximation of the operator (∇², often) over a finite set of points at which the field is to be computed (as in finite differences), or approximation of the field estimate (as in variational methods), causes the problem to be modeled by simultaneous equations. These equations are conveniently represented by matrices.

It happens, and we shall see why in succeeding sections, that the finite difference approach creates huge, sparse matrices of order 5000 to 20000. Sparseness means that there are very few nonzero elements, perhaps a tenth of one percent. On the other hand, variational methods tend to produce matrices that are small (typically of order 50), but dense by comparison. The size and density of these matrices is of prime importance in determining how they should be solved. Direct methods, requiring storage of all matrix elements, are necessarily limited to about 150 unknowns in a reasonably large modern machine. On the other hand, it turns out that iterative techniques often converge swiftly for sparse matrices. In addition, if the values and locations of nonzero elements are easily computed (as occurs with many finite difference schemes, which produce only five nonzero elements per equation), there is no point in storing the large sparse matrix in its entirety. In such cases, only the solution vector need be stored at any stage of the iterative process. This maneuver allows systems having up to 20000 unknowns to be solved.

This section deals with examples of direct and iterative techniques for the solution of systems of simultaneous linear equations. This is followed by a method for solving the eigenvalue problem in which the entire matrix must be stored but all eigenvectors and eigenvalues are found at once. Solution of the eigenvalue problem by an iterative technique is reserved for Section IV, where it is more appropriate.

A. Solution by Direct Methods

One of the most popular algorithms for the solution of systems of linear equations is known as the method of Gauss. It consists of two parts: elimination (or triangularization) and back substitution. The procedure is, in principle, very simple and is best illustrated by means of an example. We wish to solve the system

    Ax = b                                                    (1)

where A is an n x n matrix, and x and b are n-element vectors (column matrices); b is known and x is to be computed. Consider, for convenience, a small set of equations in three unknowns x1, x2, and x3:

     x1 + 4x2 +  x3 =  7
     x1 + 6x2 -  x3 = 13
    2x1 -  x2 + 2x3 =  5.

This example is from [9, pp. 187-188]. Subtract the first equation from the second of the set. Then subtract twice the first from the last. This results in

     x1 + 4x2 +  x3 =  7
          2x2 - 2x3 =  6
         -9x2 + 0x3 = -9.

The first equation, which was used to clear away the first column, is known as the pivot equation, and the coefficient of that equation which was responsible for the clearing operation is termed the pivot. Now the first term of the altered second equation is made the pivot. Adding 4.5 times the second equation to the last, we obtain

     x1 + 4x2 +  x3 =  7
          2x2 - 2x3 =  6
              - 9x3 = 18.

Clearly, in the computer there is no need to store the equations as in the preceding sets. In practice, only the numerical values are stored in matrix form, with one vector reserved for the as yet unknown variables. When the above procedure is completed, the matrix is said to have been triangularized.

The final step, back substitution, is now initiated. Using the last equation of the last set, it is a simple matter to solve for x3 = -2. Substituting the numerical value of x3 into the second equation, x2 = 1 is then easily found. Finally, from the first equation, we get x1 = 5. And so, working back through the equations in the set, each unknown is computed, one at a time, whatever the number of equations involved. It turns out that only about n³/3 multiplications are required to solve a system of n real, linear equations. This is a reasonable figure and should be compared with the phenomenal amount of work involved in applying Cramer's rule directly. In addition, the method is easy to program.

This simplified account glosses over certain difficulties that may occasionally occur. If one of the pivots is zero, it is impossible to employ it in clearing out a column. The cure is to rearrange the sequence of equations such that the pivot is nonzero. One can appreciate that even if the pivot is nonzero, but is very small by comparison with other numbers in its column, numerical problems will cause error in the solution. This is due to the fact that most computers cannot hold numbers exactly in a limited word-length store. The previous example, using integers only, is exceptional. Typically, depending on the make of machine, computers can hold numbers with 7 to 11 significant digits in single precision. To illustrate the sort of error that can occur, consider an example from [15, p. 34]. Imagine that our computer can hold only three significant digits in floating point form. It does this by storing the three significant digits along with an exponent of the base 10. The equations

    1.00 x 10⁻⁴ x1 + 1.00 x2 = 1.00
           1.00 x1 + 1.00 x2 = 2.00

are to be solved. Triangularizing as before, while observing the word-length limitation, we obtain

    1.00 x 10⁻⁴ x1 + 1.00 x2 = 1.00
              -1.00 x 10⁴ x2 = -1.00 x 10⁴.

Solving for x2 and then back-substituting, the computed result is x2 = 1.00 and x1 = 0.00, which is quite incorrect. By the simple expedient of reversing the order of the equations, we find the result x2 = 1.00 and x1 = 1.00, which is correct to three significant digits.
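The elimination and back-substitution procedure, guarded against small pivots by row interchange as in the example just shown, can be sketched as follows (a minimal illustration of the method, not the paper's own program; all names are mine):

```python
def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting
    and back substitution (a sketch of the method of Gauss)."""
    n = len(A)
    # Work on copies so the caller's data are not destroyed.
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # Partial pivoting: pick the largest entry in column k as pivot.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]          # elimination multiplier
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution, from the last equation upward.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# The worked 3-by-3 example: solution is x1 = 5, x2 = 1, x3 = -2.
x = gauss_solve([[1.0, 4.0, 1.0], [1.0, 6.0, -1.0], [2.0, -1.0, 2.0]],
                [7.0, 13.0, 5.0])
```

With the row interchanges removed, the same routine reproduces the three-digit failure discussed above whenever a pivot is small compared with the other entries in its column.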


The rule, then, is at any stage of the computation to select the largest numerical value in the relevant column as the pivot and to interchange the two equations accordingly. This procedure is known as a partial pivoting strategy. A complete pivoting strategy involves the interchange of columns as well as rows in order to use the numerically largest number available in the remaining equations as the pivot. In practice, the improvement in results due to this further refinement does not appear to warrant the additional complications and computing time.

The previous example suffered from a pure scaling problem and is easily handled by a pivoting strategy. On the other hand, except for the eigenvalue problem, nothing can be done about solving a system of linear equations whose coefficient matrix is singular. In the two-dimensional case, singularity (i.e., the vanishing of the coefficient matrix determinant) means that the two lines are parallel. Hence, no solution exists. A similar situation occurs with parallel planes or hyperplanes when the order is greater than two. A well-conditioned system describes hyperplanes that intersect at nearly 90°. The intersection point (or solution) is relatively insensitive to roundoff error as a consequence. If hyperplanes intersect at small angles, roundoff error causes appreciable motion of the intersection point, and so a low degree of trust may be placed in the solution. Such systems are termed ill-conditioned.

It is unnecessary, and wasteful, to compute the inverse A⁻¹ in order to solve (1). Occasionally, however, a matrix must be inverted. This is reasonably easy to accomplish. One of the simplest methods is to triangularize A, as previously described, and then to solve (1) n times using a different right-hand side b each time. In the first case b should have 1 in its first element with zeros in the remainder. The 1 should then appear in the second location for the next back-substitution sequence, and so on. Each time, the back substitution gives one column of the inverted matrix, and n back-substitutions produce the entire inverted matrix. See [12, pp. 32-33]. The procedure is not quite so simple when, as is usually advisable, a pivoting strategy is employed and when one wishes to triangularize A only once for all n back-substitutions. IBM supplies a program called MINV which does this, with a complete pivoting strategy, in an efficient fashion. Westlake [24, pp. 106-107] reports results of inverting a matrix of order n = 50 on CDC 6600 and CDC 1604A machines. Computing times were approximately 0.3 and 17 seconds, respectively. These figures are about double those predicted by accounting only for basic arithmetic operations; the increased time is due to other computer operations involved and is some function of how well the program was written. Tests run on the University of Manitoba IBM 360/65 showed that MINV inverts a matrix of order 50 in 3.38 seconds. The matrix elements were random numbers in the range 0 to 10.

Note that the determinant of a triangular matrix is simply the product of the diagonal terms.

The elimination and back-substitution method is easily adapted to the solution of simultaneous, linear, complex equations if the computer has a complex arithmetic facility, as most modern machines do. Failing this, the equations can be separated into real and imaginary parts and a program designed for real numbers can then be used. The latter procedure is undesirable as more store and computing time are required.

Other inversion algorithms exist, many of which depend upon particular characteristics of the matrix involved. In particular, when a matrix is symmetric, the symmetric Cholesky method [3, pp. 76-78 and 95-97] appears to be the speediest, perhaps twice as fast as Gauss's method. Under the above condition, we can write

    A = L L~                                                  (2)

where the tilde denotes transposition. L is a lower triangular matrix, i.e., nonzero elements occur only along and below the diagonal, as in

    L = | l11   0    0  |
        | l21  l22   0  |                                     (3)
        | l31  l32  l33 |.

The elements of L may be easily determined by successively employing the following equations:

    a11 = l11²,          a12 = l11 l21,            a13 = l11 l31,
    a22 = l21² + l22²,   a23 = l21 l31 + l22 l32,  a33 = l31² + l32² + l33².    (4)

Thus, the first equation of (4) gives l11 explicitly. Using l11, the second equation supplies l21, and so on. This operation is known as triangular decomposition. The inverse of A is

    A⁻¹ = L~⁻¹ L⁻¹ = (L⁻¹)~ L⁻¹                               (5)

which requires only the inverse of L and one matrix multiplication. The inverse X = L⁻¹ of a triangular matrix is particularly easy to obtain, by the sequences

    l11 x11 = 1,
    l21 x11 + l22 x21 = 0,
    l31 x11 + l32 x21 + l33 x31 = 0,
    l22 x22 = 1,
    l32 x22 + l33 x32 = 0,
    l33 x33 = 1.                                              (6)

In this way, a symmetric matrix is most economically inverted. From (4), it is clear that complex arithmetic may have to be performed. However, if in addition to being symmetric A is positive definite as well, we are assured [3, p. 78] that only real arithmetic is required. In addition, the algorithm is extremely stable.
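The triangular decomposition (2)-(4) extends, row by row, to any order n. A minimal sketch (my own illustration, assuming A is symmetric and positive definite so that only real arithmetic is needed):

```python
import math

def cholesky(A):
    """Triangular (Cholesky) decomposition A = L L~ of a symmetric
    positive definite matrix, in the manner of equations (2)-(4)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            # a_ij = sum over k of l_ik l_jk, solved for one new
            # element of L at a time, as in (4).
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)    # diagonal element
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]   # below-diagonal element
    return L

A = [[4.0, 2.0, 2.0],
     [2.0, 5.0, 3.0],
     [2.0, 3.0, 6.0]]
L = cholesky(A)   # L = [[2,0,0],[1,2,0],[1,1,2]]
```

Since the determinant of a triangular matrix is the product of its diagonal terms, the decomposition also yields det A = (l11 l22 ... lnn)² at no extra cost.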


B. Solution by Iterative Methods

Iterative methods offer an alternative to the direct methods of solution previously described. As a typical ith equation of the system (1), we have

    Σ_{j=1..n} a_ij x_j = b_i.                                (7)

Rearranging for the ith unknown,

    x_i = b_i/a_ii - Σ_{j=1..n, j≠i} (a_ij/a_ii) x_j.         (8)

This gives one unknown in terms of the others. The intention is to be able to make a guess at all variables and then successively correct them, one at a time, as indicated above. Different strategies are available. One can, for example, compute revised estimates of all x_i using the previously assumed values; upon completion of the scan of all equations, the old values are then overwritten by the new ones. Intuitively, however, it seems more reasonable to use an updated estimate just as soon as it is available, rather than to store it until the equation scan is completed. With the understanding that variables are immediately overwritten in computation, we have

    x_i^(m+1) = b_i/a_ii - Σ_{j=1..i-1} (a_ij/a_ii) x_j^(m+1) - Σ_{j=i+1..n} (a_ij/a_ii) x_j^(m)    (9)

with 1 ≤ i ≤ n, m ≥ 0, where m denotes the iteration count. The two-part summation indicates that some variables are newly updated. With this method, only n storage locations need be available for the x_i (in comparison with 2n by the previous scheme), and convergence to the solution is more rapid [22, p. 71].

The difference between any two successive x_i values corresponds to a correction term to be applied in updating the current estimate. In order to speed convergence, it is possible to overcorrect at each stage by a factor ω. ω usually lies between 1 and 2 and is often altered between successive scans of the equation set in an attempt to maximize the convergence rate. The iteration form is

    x_i^(m+1) = x_i^(m) + ω [ b_i/a_ii - Σ_{j=1..i-1} (a_ij/a_ii) x_j^(m+1) - Σ_{j=i..n} (a_ij/a_ii) x_j^(m) ].    (10)

It turns out that convergence to the solution is guaranteed if matrix A is symmetric and positive definite [34, pp. 237-238] and if 0 < ω < 2. If ω < 1 we have underrelaxation, and overrelaxation if ω > 1. This procedure is known as successive overrelaxation (SOR), there being no point in underrelaxing.

Surprisingly, (10) can be described as a simple matrix iterative procedure [22, pp. 58-59]. By expansion, it is easy to prove that

    (D - ωE) x^(m+1) = {(1 - ω)D + ωF} x^(m) + ωb             (11)

where D is a diagonal matrix consisting of the diagonal elements of A, E consists of the negatives of all its elements beneath the diagonal with zeros elsewhere, and F is a matrix consisting of the negatives of those elements above the diagonal of A. Rearranging (11),

    x^(m+1) = (D - ωE)⁻¹{(1 - ω)D + ωF} x^(m) + ω(D - ωE)⁻¹ b.    (12)

Define the matrix ℒ_ω accompanying x^(m) as

    ℒ_ω = (D - ωE)⁻¹{(1 - ω)D + ωF}.                          (13)

ℒ_ω is known as the SOR iteration matrix and plays an important role in the convergence properties of the method. The iterative procedure can be rewritten

    x^(m+1) = ℒ_ω x^(m) + c                                   (14)

where c is the last term of (12). Of course, one does not literally set up ℒ_ω in order to perform the SOR process, but (14) expresses mathematically what happens when the algorithm described by (10) is implemented. The process must be stationary when the solution is attained, i.e., x^(m+1) = x^(m) = x. Therefore, we must have that

    c = (I - ℒ_ω) x.                                          (15)

Substitute (15) into (14) and rearrange to obtain

    x^(m+1) - x = ℒ_ω (x^(m) - x).                            (16)

At the mth iteration, define

    ε^(m) = x^(m) - x.                                        (17)

These terms are clearly error vectors, as they consist of elements giving the difference between exact and computed values. With (17), (16) becomes

    ε^(m+1) = ℒ_ω ε^(m) = ℒ_ω² ε^(m-1) = ... = ℒ_ω^(m+1) ε^(0)    (18)

due to the recursive form of (16). It is therefore clear that the error tends to vanish when the matrix ℒ_ω, raised to successively higher powers, tends to vanish with exponentiation. It can be proved [22, pp. 13-15] that any matrix raised to a power r vanishes as r → ∞ if and only if each eigenvalue of the matrix is of absolute value less than one. It is easy to demonstrate this for the special case of a real, nonsymmetric matrix with distinct eigenvalues. We cannot be sure that ℒ_ω (which is not symmetric in general) has only distinct eigenvalues, but we will assume this for purposes of illustration. In this case all eigenvectors are linearly independent. Therefore, any arbitrary error vector ε^(0) may be expressed as a linear combination of eigenvectors l_i of ℒ_ω, i.e.,

    ε^(0) = a_1 l_1 + a_2 l_2 + ... + a_n l_n.                (19)

If the eigenvectors are known, the unknown a_i may be found by solving a system of simultaneous linear equations. As the l_i are linearly independent, the square matrix consisting of the elements of all the l_i has an inverse, and so a solution must exist. Performing the recursive operations defined by (18), we obtain

    ε^(m+1) = a_1 μ_1^(m+1) l_1 + a_2 μ_2^(m+1) l_2 + ... + a_n μ_n^(m+1) l_n    (20)

where μ_i is the eigenvalue of ℒ_ω corresponding to the eigenvector l_i. It is obvious, from (20), that if all eigenvalues of ℒ_ω are numerically less than unity, SOR will converge to the solution of (1). In the literature, the magnitude of the largest eigenvalue is known as the spectral radius ρ(ℒ_ω). It is also easily shown that the displacement vector δ, which states the difference between two successive x estimates, can replace ε in (18)-(20).
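The sweep (10) can be sketched in a few lines (my own minimal illustration, not the paper's program), here applied to a small symmetric positive definite system with a known solution:

```python
def sor(A, b, omega, sweeps):
    """Successive overrelaxation, equation (10): each unknown is
    overcorrected by the factor omega as soon as it is updated."""
    n = len(A)
    x = [0.0] * n                           # initial guess
    for _ in range(sweeps):
        for i in range(n):
            # Gauss-Seidel value using the latest available estimates
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_gs = (b[i] - s) / A[i][i]
            x[i] += omega * (x_gs - x[i])   # overcorrect the change
    return x

# Symmetric, positive definite, diagonally dominant test system;
# the exact solution is x = (1, 2, 3).
A = [[4.0, 1.0, 1.0], [1.0, 4.0, 1.0], [1.0, 1.0, 4.0]]
b = [9.0, 12.0, 15.0]
x = sor(A, b, omega=1.1, sweeps=50)
```

With ω = 1 the routine reduces to the sweep (9); its convergence rate is governed by the spectral radius ρ(ℒ_ω) discussed above.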

A sufficient, although often overly stringent, condition for the convergence of SOR is that the coefficient matrix A display diagonal dominance. This occurs if, in each row of A, the absolute value of the diagonal term is greater than the sum of the absolute values of all the off-diagonal terms. In practice, we are concerned not only with whether or not convergence will occur, but also with its speed. Equation (20) indicates that the smaller the spectral radius of ℒ_ω, the more rapidly convergence occurs. As indicated by the subscript, ℒ_ω and its eigenvalues are some function of the acceleration factor ω. The relationship between ω and the spectral radius ρ(ℒ_ω) is a very complicated one, and so only very rarely can the optimum acceleration factor ω_opt be predicted in advance. However, some useful schemes exist for successively approximating the required ω_opt as computation proceeds (see [14], [15], [17]-[20]).

To conclude, the main advantage of SOR is that it is unnecessary to store the zero elements of the square matrix, as is required by many direct methods. This is of prime importance for the solution of difference equations, which then require only the solution vector to be stored. Also, the iteration procedure tends to be self-correcting, and so roundoff errors are somewhat restricted. There are direct methods that economize on empty matrix elements, but SOR is perhaps the easiest way to accomplish this end.

C. The Matrix Eigenvalue Problem

The general matrix eigenvalue problem is of the form

    (A - λB) x = 0                                            (21)

where A and B are square matrices. This is the form that results from a variational solution (see Section V). The problem is to find eigenvalues λ and associated eigenvectors x such that (21) holds. This represents a system of linear, homogeneous equations, and so a solution can exist only if the determinant of (A - λB) vanishes. If A and B are known matrices, then a solution can exist only for values of λ which make the determinant vanish, and eigenvectors can be found that correspond to this set of eigenvalues. It is not a practical proposition to find the eigenvalues simply by assuming trial λ values and evaluating the determinant each time until it is found to vanish. Such a procedure is hopelessly inefficient.

An algorithm for matrices that are small enough to be held entirely in the fast store (perhaps n = 100) is the Jacobi method for real, symmetric matrices. Although it is not a very efficient method, it is fairly easy to program and serves as an example of a class of methods relying upon symmetry and rotations to secure the solution of the eigensystem. If B = I, the unit matrix, (21) becomes

    (A - λI) x = 0                                            (22)

which is the form obtained by finite difference methods. If A can be stored in the computer, and if A is symmetric as well, a method using matrix rotations may be used. Equation (21) may not be put into the form (22), with symmetry preserved, simply by premultiplying by B⁻¹; the product of two symmetric matrices is not in general symmetric. However, if we know that B is symmetric and positive definite, triangular decomposition (as described in Section II-A) may be employed using only real arithmetic. Therefore

    B = L L~                                                  (23)

and we define

    C = L⁻¹ A L~⁻¹.                                           (24)

Note that C~ equals C, and so C is symmetric. Taking determinants,

    det (A - λB) = det (L) det (C - λI) det (L~).             (25)

Since det (L) is nonzero, the eigenvalues of (21) are those of C. Therefore, instead of (21), we can solve the eigenvalue problem

    (C - λI) y = 0                                            (26)

where

    y = L~ x.                                                 (27)

We therefore obtain the required eigenvalues by solving (26). Eigenvectors y are easily transformed to the required x by inverting L~ in (27).

Orthogonal matrices are basic to Jacobi's method. A matrix T is orthogonal if

    T~ T = I                                                  (28)

the unit matrix. If T consists of real elements, it is orthogonal if the sum of the squares of the elements of each column equals one and if the sum of the products of corresponding elements in two different columns vanishes. One example is

    T = | cos φ   -sin φ |
        | sin φ    cos φ |.                                   (29)

Matrix (29) can be inserted into an otherwise unit matrix such that the cos φ terms occur along the diagonal. This is also an orthogonal matrix, and we shall denote it T as well. Since the determinant of a product of several matrices equals the product of the determinants, we can see from (28) that det (T) = 1 or -1. Therefore

    det (A - λI) = det (T~ (A - λI) T) = det (T~ A T - λI)    (30)

and so the eigenvalues of A and T~AT are the same. It is possible to so position (29) in any larger unit matrix, and to fix a value of φ, such that the transformed matrix T~AT has zeros in any chosen pair of symmetrically placed elements. Usually one chooses to do this to the pair of elements having the largest absolute values. In doing this, certain other elements are altered as well. The procedure is repeated successively, with the effect that all off-diagonal terms gradually vanish, leaving a diagonal matrix. The elements of a diagonal matrix are its eigenvalues, and so these are all given simultaneously. It is possible to calculate the maximum error of the eigenvalues at any stage of the process, and so to terminate the operation when sufficient accuracy is guaranteed. The product of all the transformation matrices is continuously computed; the columns of this matrix are the eigenvectors of A. See [3, pp. 109-112] and [7, pp. 129-131].
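The rotation cycle can be sketched as follows (my own minimal illustration for a real symmetric matrix; a production routine would also accumulate the product of the rotations to obtain the eigenvectors):

```python
import math

def jacobi_eigenvalues(A, tol=1e-12):
    """Jacobi's method for a real symmetric matrix: repeatedly apply
    plane rotations T~AT chosen to annihilate the largest off-diagonal
    pair, until the matrix is effectively diagonal."""
    n = len(A)
    A = [row[:] for row in A]
    while True:
        # locate the largest off-diagonal element A[p][q]
        p, q = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda ij: abs(A[ij[0]][ij[1]]))
        if abs(A[p][q]) < tol:
            break
        # rotation angle phi chosen so that the new A[p][q] vanishes
        phi = 0.5 * math.atan2(2.0 * A[p][q], A[p][p] - A[q][q])
        c, s = math.cos(phi), math.sin(phi)
        for k in range(n):                 # A <- A T
            akp, akq = A[k][p], A[k][q]
            A[k][p] = c * akp + s * akq
            A[k][q] = -s * akp + c * akq
        for k in range(n):                 # A <- T~ A
            apk, aqk = A[p][k], A[q][k]
            A[p][k] = c * apk + s * aqk
            A[q][k] = -s * apk + c * aqk
    return sorted(A[i][i] for i in range(n))

# Symmetric test matrix with eigenvalues 1 and 3.
eigs = jacobi_eigenvalues([[2.0, 1.0], [1.0, 2.0]])
```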


Probably the most efficient procedure, in both storage and time, is that due to Householder. It is described by Wilkinson [26] and, in a very readable form, by Walden [23]. A method employing SOR, for large sparse matrices, is described in Section IV.

III. FINITE DIFFERENCES

The importance of finite differences lies in the ease with which many logically complicated operations and functions may be discretized. Operations are then performed not upon continuous functions but rather, approximately, in terms of values over a discrete point set. It is hoped that as the distance between points is made sufficiently small, the approximation becomes increasingly accurate. The great advantage of this approach is that operations such as differentiation and integration may be reduced to simple arithmetic forms and can then be conveniently programmed for automatic digital computation. In short, complexity is exchanged for labor.

A. Differentiation

First of all, consider differentiation. The analytic approach often requires much logical subtlety and algebraic innovation. The numerical method, on the other hand, is very direct and simple in principle, although in many instances it presents problems of specific kinds.

Consistent with a frequent finite difference notation, a function f evaluated at any x is often written f(x) = f_x. At a distance h to the right, f(x + h) = f_{x+h}. To the left we have f_{x-h}, f_{x-2h}, etc. Alternatively, nodes (or pivotal points) are numbered, node i yielding function values f_i, f_{i+1}, f_{i-1}, etc. It is understood that the distance between nodes is h and that node i corresponds to a particular x. Refer to Fig. 1.

[Fig. 1. Forward, backward, and central differences.]

The derivative f'_x = df/dx, at any specified x, may be approximated by the forward difference formula

    f'_x ≈ (f_{x+h} - f_x)/h                                  (31)

where the function is evaluated at these two points. This, for very small h, corresponds to our intuitive notion of a derivative. Equally, the derivative may be expressed by the backward difference formula

    f'_x ≈ (f_x - f_{x-h})/h.                                 (32)

These are, in general, not equal. The first, in our example, gives a low value and the second a high one. We can therefore expect the average value

    f'_x ≈ (f_{x+h} - f_{x-h})/2h                             (33)

to give a closer estimate. This expression is the central difference formula for the first derivative. In the figure, we see that the forward and backward differences give slopes of chords on alternate sides of the point being considered. The central difference, given by the slope of the chord passing through f_{x+h} and f_{x-h}, certainly appears to approximate the true derivative most closely. Usually, this is so. Forward and backward difference formulas are required to compute derivatives at the extreme ends of a series of tabulated data. The central difference formula is preferred, and should be used, whenever data are available on both sides of the point being considered.

An idea of the accuracy of (31)-(33) is obtained from a Taylor's expansion at x + h. This gives

    f(x + h) = f(x) + h f'(x) + (h²/2!) f''(x) + ... .        (34)

Similarly, at x - h,

    f(x - h) = f(x) - h f'(x) + (h²/2!) f''(x) - ... .        (35)

Subtracting f_x from (34), then rearranging, gives

    f'_x = (f_{x+h} - f_x)/h + O(h).                          (36)

O(h) means that the leading correction term deleted from this forward difference derivative approximation is of order h. Similarly, from (35), the backward difference formula can be seen to be of order h as well. Subtracting (35) from (34), and rearranging, we obtain

    f'_x = (f_{x+h} - f_{x-h})/2h - (h²/6) f'''_x - ... = (f_{x+h} - f_{x-h})/2h + O(h²)    (37)

which is the central difference formula (33) with a leading error term of order h². Assuming that the derivatives are well behaved, we can consider the leading term to be almost the sole source of error. We see that decreasing the interval results in higher accuracy, as would be expected. In addition, we see that the error decreases quadratically for the central difference derivative and only linearly for the forward or backward ones.
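The O(h) versus O(h²) behavior of (31) and (33) is easy to exhibit numerically (my own illustration): halving h roughly halves the forward difference error but quarters the central difference error.

```python
import math

def forward(f, x, h):
    return (f(x + h) - f(x)) / h           # formula (31), error O(h)

def central(f, x, h):
    return (f(x + h) - f(x - h)) / (2*h)   # formula (33), error O(h^2)

f, df = math.sin, math.cos                 # test function and exact derivative
x = 1.0
err_f = [abs(forward(f, x, h) - df(x)) for h in (0.1, 0.05)]
err_c = [abs(central(f, x, h) - df(x)) for h in (0.1, 0.05)]
ratio_f = err_f[0] / err_f[1]              # approaches 2 (linear in h)
ratio_c = err_c[0] / err_c[1]              # approaches 4 (quadratic in h)
```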


sider the derivative at x+h/2. half the previous interval,

Using central differences,

with

.f~+hiz = (f~+~ ~S)/h. Similarly,

(38)

f;-h/z

= (.fz j#t)/ii

(39)
giving the values of the first derivative at two points distance h apart. The second derivative at x is then

    f''_x = (f_{x+h} - 2f_x + f_{x-h})/h²    (40)

in which (33) has been applied, with (38) and (39) supplying the two first derivative values. By adding (34) and (35), (40) may be derived with the additional information that the error is of order h².

Another point of view for understanding finite difference differentiation is the following. Assume that a parabola

    f(x) = ax² + bx + c    (41)

is passed through the three points f_{x-h}, f_x, and f_{x+h} of Fig. 1. For simplicity, let x = 0. Evaluating (41) at x = -h, 0, and h, we easily find that

    a = (f_{-h} - 2f_0 + f_h)/2h²,    (42)
    b = (f_h - f_{-h})/2h,    (43)
    c = f_0.    (44)

Differentiating (41), then setting x = 0, we get the same forms as (33) and (40) for the first and second derivatives. Thus, differentiation of functions specified by discrete data is performed by fitting a polynomial (either explicitly or implicitly) to the data and then differentiating the polynomial. It is clear then that an nth derivative of a function can be obtained only if at least n + 1 data points are available.

A word of warning: this cannot be pursued to very high orders if the data is not overly accurate. A high-order polynomial, made to fit a large number of data points, may experience severe undulations. Under such circumstances, the derivative will be unreliable due to higher order terms of the Taylor series. Also note that the accuracy of a numerical differentiation cannot be indefinitely increased by decreasing h, even if the function can be evaluated at any required point. Differentiation involves differences of numbers that are almost equal over small intervals. Due to this cause, roundoff error may become significant, and so the lower limit to h is set largely by the computer word length.

Bearing in mind that high-order polynomials can cause trouble, some increase in accuracy is possible by using more than the minimum number of pivots required for a given derivative. For example, as an alternative to (40), the second derivative can be obtained from

    f''_x = (-f_{x-2h} + 16f_{x-h} - 30f_x + 16f_{x+h} - f_{x+2h})/12h²    (45)

with an error of O(h⁴).

Not always are evenly spaced pivotal points available. Finite difference derivative operators are available for such cases as well. This is obvious, as approximating polynomials can be fitted to data not equispaced. For example, if pivots exist at x - h, x, and x + αh (where α is a real number), an appropriate derivative expression is

    f''_x = 2(αf_{x-h} - (1 + α)f_x + f_{x+αh})/(α(α + 1)h²)    (46)

which is identical to (40) when α = 1. However, if α ≠ 1, (46) and other differentiation equations for unsymmetric pivots have reduced accuracy. The preceding, and many other differentiation expressions, are given in [12, pp. 64-87]. Finally, since numerical differentiation involves the determination of an interpolating polynomial, and subsequent differentiation, there is no reason that the derivative need be restricted to pivotal points.

The most important derivative operator for our purposes is the Laplacian ∇². In two dimensions it becomes ∇_t². Acting upon a potential φ(x, y) we have

    ∇_t²φ = ∂²φ/∂x² + ∂²φ/∂y²    (47)

where x and y are Cartesian coordinates. Using a double subscript index convention, and applying (40) for each coordinate, the finite difference representation of (47) is

    ∇_t²φ ≈ (φ_{i-1,j} + φ_{i,j-1} - 4φ_{i,j} + φ_{i+1,j} + φ_{i,j+1})/h².    (48)

Fig. 2. Five-point finite difference operator.

Fig. 2 illustrates this five-point, finite difference operator, which is appropriate for equispaced data. Often it is necessary to space one or more of the nodes irregularly. This is done in the same fashion employed in the derivation of (46) [12, pp. 231-234]. Finite difference Laplacian operators are also available in many coordinate systems other than the Cartesian. For example, see [12, pp. 237-252].

B. Integration

Numerical integration is used whenever a function cannot easily be integrated in closed form or when the function is described by discrete data. The principle behind the usual method is to fit a polynomial to several adjacent points and integrate the polynomial analytically. Refer to Fig. 3 in which the function f(x) is expressed by data at n equispaced nodes. The most obvious integration

formula results by assuming that the function f(x) consists of piecewise-linear sections.

Fig. 3. Integration under the curve f(x): trapezoidal rule for a single strip and Simpson's 1/3 rule for pairs of strips.

Each of these subareas is easily computed through

    A_i = (h/2)(f_i + f_{i+1})    (49)

where x_i is the value of x at the ith pivot. The total area is found by summing all A_i. Equation (49) is the trapezoidal rule for approximating the area under one strip, with error of order h³. The trapezoidal formula (49) may be applied, sequentially, to a large number of strips. In this way we obtain

    ∫_a^b f(x)dx ≈ h(f_1/2 + f_2 + f_3 + ... + f_{n-1} + f_n/2)    (50)

which has a remainder of order h². The decrease in accuracy, compared with the single-strip case, is due to error accumulated by adding all the constituent subareas.

A more accurate formula (and perhaps the most popular one) is Simpson's 1/3 rule

    ∫_{x-h}^{x+h} f(x)dx ≈ (h/3)(f_{x-h} + 4f_x + f_{x+h})    (51)

which, by using a parabola, approximates the integral over two strips to O(h⁵). If the interval (a, b) is divided into an even number of strips, (51) can be applied at each pair in turn. Therefore

    ∫_a^b f(x)dx ≈ (h/3)(f_0 + 4f_1 + 2f_2 + 4f_3 + ... + 2f_{n-2} + 4f_{n-1} + f_n)    (52)

with an error of O(h⁴). Another Simpson's formula, known as the 3/8 rule (because the factor 3/8 appears in it), integrates groups of three strips with the same accuracy as the 1/3 rule. Thus, the two Simpson's rules may be used together to cater for an odd number of strips.

It is not possible to indefinitely reduce h with the expectation of increased accuracy. Although smaller intervals reduce the discretization error, the increased arithmetic causes larger total roundoff error. A point is reached where minimum error occurs for any particular algorithm using any given word length. This is indicated in Fig. 4.

Fig. 4. Error as a function of mesh interval, showing discretization error, roundoff error, and their total.

Highly accurate numerical integration procedures are provided by Gauss quadrature formulas [7, pp. 312-367]. Rather than using predetermined pivot positions, this method chooses them in order to minimize the error. As a result, it can only be used when the integrand is an explicit function.

Multiple integration [12, pp. 198-206] is, in theory, a simple extension of one-dimensional integration. In practice, beyond double or triple integration the procedure becomes very time consuming and cumbersome. To integrate

    V = ∫_a^b ∫_c^d f(x, y) dx dy    (53)

over the specified limits, the region is subdivided (see Fig. 5(a)). As one would expect, to perform a double integration by the trapezoidal rule, two applications of (49) are required for each elemental region. Integrating along x, over the element shown, we obtain

    g_j = (h/2)(f_{i,j} + f_{i+1,j})    (54)

and

    g_{j+1} = (h/2)(f_{i,j+1} + f_{i+1,j+1}).    (55)

It now remains to perform the integration along y,

    V ≈ (h/2)(g_j + g_{j+1})    (56)

which results from two applications of the trapezoidal rule. By and large, integration is a more reliable process than differentiation, as the error in integrating an approximating polynomial tends to average out over the interval.
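The relative accuracy of the two rules is easily demonstrated. In the sketch below (Python; the integrand and strip counts are arbitrary illustrative choices), both composite rules are applied to the integral of sin x over [0, 1], whose value is known exactly.

```python
import math

def trapezoid(f, a, b, n):
    # Composite trapezoidal rule over n strips: overall error O(h^2).
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def simpson(f, a, b, n):
    # Composite Simpson's 1/3 rule (n even): overall error O(h^4).
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return h * s / 3

exact = 1.0 - math.cos(1.0)
for n in (16, 32):
    print(n,
          abs(trapezoid(math.sin, 0.0, 1.0, n) - exact),  # falls by ~4x
          abs(simpson(math.sin, 0.0, 1.0, n) - exact))    # falls by ~16x
```

Doubling the number of strips reduces the trapezoidal error by about four and the Simpson error by about sixteen, as the h² and h⁴ error terms predict.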


Fig. 5. Double integration molecules. (a) Trapezoidal rule. (b) Simpson's 1/3 rule.

Similarly, three applications of Simpson's 1/3 rule produce

    V ≈ (h²/9){16f_{i,j} + 4(f_{i,j-1} + f_{i+1,j} + f_{i,j+1} + f_{i-1,j}) + f_{i-1,j-1} + f_{i+1,j-1} + f_{i+1,j+1} + f_{i-1,j+1}}    (57)

which gives the double integral of f(x, y) over the elemental two-dimensional region of Fig. 5(b). Molecules in the figure illustrate the algorithms.

IV. FINITE DIFFERENCE SOLUTION OF PARTIAL DIFFERENTIAL EQUATIONS

The finite difference technique is perhaps the most popular numerical method for the solution of ordinary and partial differential equations. In the first place, the differential equation is transformed into a difference equation by methods described in Section III. The approximate solution to the continuous problem is then found either by solving large systems of simultaneous linear equations for the deterministic problem, or by solving the algebraic eigenvalue problem, as outlined in Section II. From the point of view of fields, the resulting solution is then usually a potential function φ defined at a finite number of points rather than continuously over the region.

A. Boundary Conditions

The most frequently occurring boundary conditions are of several, very general forms. Consider, first of all, the Dirichlet boundary condition defined by

    φ(s) = g(s)    (58)

which states the values of potential at all points along any number of segments of the boundary. See Fig. 6, which represents a general two-dimensional region, part of it obeying (58).

Fig. 6. A mixed boundary value problem.

If we visualize this region as a sheet of resistive material, with surface resistivity r, the Dirichlet border is simply one maintained at a potential g(s). At ground potential, g(s) = 0, which is the homogeneous Dirichlet boundary condition.

The Neumann boundary condition

    ∂φ/∂n |_s = p(s)    (59)

is also easy to interpret physically. Imagine current to be forced into the region, across the boundary, at a rate independent of the potential. A large number of constant current sources, strung along the boundary, would simulate this effect. The normal, linear current flow density is -i_n(s), the negative sign signifying a flow direction in a sense opposite to the unit normal. The surface resistivity r divided into the normal electric field strength at the boundary gives this current density, i.e., (1/r)(∂φ/∂n)|_s = -i_n(s). Another analogy is the flux emanating from a distributed sheet of charge backed by a conductor. In general then, the Neumann condition is written as in (59). Along an impermeable border, or an open circuit boundary, as in Fig. 6, p(s) = 0.

The remaining boundary condition of concern to us is the Cauchy (or third) condition. Imagine that the border is a film offering resistance R to current flowing across it. Such a film occurs in heat transfer between, say, a metal and a fluid. A series of resistors strung out across the boundary would simulate such an effect. Let the potential just outside the conducting region be φ_0(s), a function of position along the curve. The potential just inside is φ(s), and so the linear current density transferred across is i_n(s) = (φ(s) - φ_0(s))/R, where R is the film resistance. Since (1/r)(∂φ/∂n)|_s = -i_n(s), we have, by eliminating i_n(s), (R/r)(∂φ/∂n)|_s + φ(s) = φ_0(s). In general, this is written

    ∂φ/∂n |_s + σ(s)φ(s) = q(s).    (60)

A harmonic wave function φ, propagating into an infinite region in the z direction, obeys ∂φ/∂z = -jβφ, or ∂φ/∂z + jβφ = 0, at any plane perpendicular to z. This is a homogeneous case of (60). A region having two or more types of boundary conditions is considered to constitute a mixed problem.

B. Difference Equations of the Elliptic Problem

Second-order partial differential equations are conveniently classified as elliptic, parabolic, and hyperbolic [12, pp. 190-191]. Under the elliptic class fall Laplace's, Poisson's,


and the Helmholtz partial differential equations. By way of illustration, we will now consider the first and last ones.

Fig. 6 shows several five-point operators in somewhat typical environments. To be specific, consider the discretization of Laplace's equation

    ∇_t²φ = 0    (61)

consistent with the applied boundary conditions. From (48), (61) becomes

    φ_a + φ_b - 4φ_c + φ_d + φ_e = 0.    (62)

A reasonably fine mesh results in a majority of equations having the form of (62). The trouble occurs in writing finite difference expressions near the boundaries. Near the Dirichlet wall we have φ_f = g(s_1), where s_1 denotes a particular point on s. Making this substitution, the finite difference expression about node h is

    φ_g - 4φ_h + φ_i + φ_j = -g_f    (63)

where s_1 and node f coincide. Along the Neumann boundary, (59) must be satisfied. To simplify matters for illustration, the five-point operator is located along a flat portion of the wall. Using the central difference formula (33), (59) becomes

    (φ_l - φ_n)/2h = p_m

where l is the fictitious external node and n the internal node opposite it. This equation can then be used to eliminate node l from the difference equation written about node m. Therefore

    2φ_n + φ_o + φ_p - 4φ_m = -2hp_m.    (64)

Finally, at the Cauchy boundary, (60) becomes (φ_t - φ_s)/2h + σ_r φ_r = q_r, which makes the node r difference equation

    2φ_s + φ_u + φ_v - 4(1 + σ_r h/2)φ_r = -2hq_r.    (65)

It is no mean feat to program the logic for boundaries (particularly non-Dirichlet ones) which do not correspond to mesh lines, and this is one of the major drawbacks of the finite difference approach. The literature gives a number of appropriate schemes. See, for example, [37, pp. 36-42], [28], [34, pp. 198-204], and [35, pp. 262-266]. In reference [27] a simple, although somewhat inaccurate, method for fitting an arbitrary shape is given. For simplicity, the boundary was permitted to deform, at each horizontal mesh level, to the nearest node point.

A crude-mesh difference scheme, for the solution of Laplace's equation under Dirichlet boundary conditions, is shown in Fig. 7. The appropriate difference equations can be written in matrix form in the following fashion:

    |-4  1  0  1  0  0  0  0  0| |φ_1|   |-2V|
    | 1 -4  1  0  1  0  0  0  0| |φ_2|   | -V|
    | 0  1 -4  0  0  1  0  0  0| |φ_3|   |-2V|
    | 1  0  0 -4  1  0  1  0  0| |φ_4|   | -V|
    | 0  1  0  1 -4  1  0  1  0| |φ_5| = |  0|    (66)
    | 0  0  1  0  1 -4  0  0  1| |φ_6|   | -V|
    | 0  0  0  1  0  0 -4  1  0| |φ_7|   |-2V|
    | 0  0  0  0  1  0  1 -4  1| |φ_8|   | -V|
    | 0  0  0  0  0  1  0  1 -4| |φ_9|   |-2V|

or

    Aφ = b.    (67)

Fig. 7. Finite difference mesh for solution of Laplace's equation under Dirichlet boundary conditions, with boundary potential φ(s) = V.

There are several significant features about (66). In the first place, there are a fairly large number of zeros. In a practical case, the mesh interval in Fig. 7 would be much smaller and the square matrix would then become very sparse indeed. In addition, the nonzero elements of any row of the square matrix, and any element of the right-hand side vector, may be easily generated at will. It is clear therefore that SOR (or any similar iterative scheme) is the obvious choice for solving the system of linear equations. All computer store may then be reserved for the φ vector and the program, thus allowing for a very fine mesh with consequent high accuracy. Certainly, a direct solution scheme, such as triangularization and back substitution, could not be considered feasible.

SOR is employed to solve for each node potential in terms of all other potentials in any given equation. As there are at most five potentials per equation, each node is therefore written in terms of its immediately adjacent potentials. In other words, it is the central node potential of each operator that is altered in the iterative process. As all points on a Dirichlet boundary must maintain fixed potentials, no operator is centered there. This does not apply to Neumann and Cauchy boundaries, however, and so potentials on these walls must be computed.

Green [36] discusses many practical aspects and problems associated with the solution of Laplace's equation in TEM transmission lines. Seeger [42] and Green describe the form of the difference equations in multidielectric regions.

Now, let Fig. 7 represent an arbitrarily shaped waveguide. For this purpose, the boundaries must be closed. The technique is to solve for a potential φ in finite difference form.
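A sketch of point SOR applied to a system of the form (66)-(67) follows (Python; illustrative). With every boundary node of the crude mesh held at the same potential V, the exact solution of Laplace's equation is the constant V, which gives a convenient check on the iteration.

```python
# Point SOR applied to the crude-mesh system (66)-(67): a 3x3 block of
# interior nodes with every boundary node held at potential V.  Since a
# constant potential satisfies Laplace's equation, the exact answer here
# is phi = V at every node.
V = 1.0
omega = 1.3                            # relaxation factor, 0 < omega < 2

A = [[0.0] * 9 for _ in range(9)]
b = [0.0] * 9
for i in range(9):
    A[i][i] = -4.0
    row, col = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            A[i][3 * r + c] = 1.0      # interior neighbour
        else:
            b[i] -= V                  # boundary neighbour moved to RHS

phi = [0.0] * 9
for sweep in range(100):
    for i in range(9):
        s = sum(A[i][j] * phi[j] for j in range(9) if j != i)
        phi[i] += omega * ((b[i] - s) / A[i][i] - phi[i])

print(phi)    # every entry close to V
```

Only one row of A is needed at a time, which is exactly the storage argument made above for preferring SOR over direct elimination on fine meshes.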

If z is along the axial direction, φ is proportional to E_z or H_z depending on whether TM or TE modes are being considered. For TM modes the boundary condition is the homogeneous Dirichlet one (58), i.e., with g(s) = 0. Fields are then derived from

    E_t = -(γ/k_c²) ∇_t φ    (68)

with H_z = 0, if E_z is made equal to φ. Likewise, the homogeneous Neumann condition (59) (with p(s) = 0) applies to TE modes, with fields obtainable from

    H_t = -(γ/k_c²) ∇_t φ    (69)

with E_z = 0, if H_z is made equal to φ. Here k_c is the cutoff wavenumber and γ the propagation constant. Of course, one does not solve for φ but rather for a vector of discrete potentials, and so the differentiations indicated by (68) and (69) must be performed by techniques outlined in Section III-A. Discretization of the Helmholtz equation

    (∇_t² + k_c²)φ = 0    (70)

produces a matrix eigenvalue problem of the form

    (A - ΛI)φ = 0.    (71)

About a typical internal node, such as 5 in Fig. 7, the finite difference form of (70) is

    -φ_2 - φ_4 + (4 - Λ)φ_5 - φ_6 - φ_8 = 0    (72)

where

    Λ = (k_c h)².    (73)

Equations such as (72), suitably amended for boundary conditions, make up the set expressed by (71). Signs are changed in (72) to make, as is the frequent convention, the matrix positive semidefinite rather than negative semidefinite.

Successive overrelaxation can be used to solve the matrix eigenvalue problem (71). In the first place a fairly crude mesh, with perhaps 50-100 nodes, is drawn over the guide cross section. A guess at the lowest eigenvalue (either an educated one or perhaps the result of a direct method on an even coarser mesh) is taken as a first approximation. The elements of vector φ are set to some value, perhaps all unity. SOR is then initiated, generating only one row at a time of A - ΛI as required. A nontrivial solution of (A - ΛI)φ = 0 can exist only if the determinant vanishes, i.e., the guess at Λ is a true eigenvalue. In general, the Λ estimate will be in error, and so φ cannot be found by SOR alone; an outer iteration employing the Rayleigh quotient (defined later) must be employed.

Application of SOR to a homogeneous set of equations causes (14) to assume the form

    φ^(m+1) = L_{ω,Λ} φ^(m).    (74)

The subscript Λ has been added to the iteration matrix to indicate a further functional dependence. As in Section II-B, for illustration, assume that L_{ω,Λ} is a real (generally nonsymmetric) matrix with distinct eigenvalues. Then, φ^(m) may be expressed as a linear combination of the eigenvectors l_i of L_{ω,Λ}, i.e.,

    φ^(m) = a_1 l_1 + a_2 l_2 + ... + a_n l_n.    (75)

Iterating through (74) s times, we find that

    φ^(m+s) = a_1 μ_1^s l_1 + a_2 μ_2^s l_2 + ... + a_n μ_n^s l_n.    (76)

If μ_1 is the eigenvalue having the greatest absolute value, then if s is large, we have substantially that

    φ^(m+s) = a_1 μ_1^s l_1.    (77)

Equation (74) must represent a stationary process when SOR has converged, and so μ_1 = 1 at the solution point. It is interesting to note that the eigenvector l_1 of L_{ω,Λ} is, or is proportional to, the required eigenvector of A when the correct Λ is substituted into (71). Rewrite (71) as

    Bφ = 0    (78)

where

    B = A - ΛI    (79)

with an assumed or computed Λ approximation. The convergence theorem [34, p. 240] states that if B is symmetric and positive semidefinite, with all diagonal terms greater than zero (which can always be arranged unless one of them vanishes), then the method of successive displacements (SOR with ω = 1) converges to the solution [34, pp. 260-262] whatever φ is initially. It will also converge for 0 < ω < 2 if the elements b_ij = b_ji (i ≠ j) and b_ii > 0.

As will usually happen, an eigenvalue estimate will not be correct. If it deviates from the exact eigenvalue by a small amount we can expect μ_1, in (77), to be slightly greater than or less than unity. Therefore φ^(m+s) will grow or diminish slowly as SOR iteration proceeds. It cannot converge to a solution as B is nonsingular. (B is termed singular if its determinant vanishes.) However, the important point is that whether φ tends to vanish or grow without limit, its elements tend to assume the correct relative values. In other words, the shape of φ converges to the correct one. After several SOR iterations the approximate φ is substituted into the Rayleigh quotient [35, pp. 74-75]

    Λ^(i+1) = (φ^T A φ)/(φ^T φ).    (80)
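The inner-outer scheme just outlined can be sketched in a few lines (Python; the 4x4 matrix, sweep counts, and starting guess are arbitrary illustrative choices). Gauss-Seidel sweeps (SOR with ω = 1) on (A - ΛI)φ = 0 let the dominant eigenvector shape emerge, and the Rayleigh quotient refreshes the eigenvalue estimate; the iteration settles on the lowest eigenvalue of A.

```python
import math

n = 4
# Finite difference matrix tridiag(-1, 2, -1); its lowest eigenvalue
# is 2 - 2*cos(pi/5), about 0.381966.
A = [[2.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0)
      for j in range(n)] for i in range(n)]

phi = [1.0] * n
lam = 0.3                              # crude initial eigenvalue guess

for outer in range(30):
    for sweep in range(5):             # inner Gauss-Seidel sweeps on B*phi = 0
        for i in range(n):
            s = sum(A[i][j] * phi[j] for j in range(n) if j != i)
            phi[i] = -s / (A[i][i] - lam)
    norm = math.sqrt(sum(p * p for p in phi))
    phi = [p / norm for p in phi]      # keep the shape, normalize the size
    Aphi = [sum(A[i][j] * phi[j] for j in range(n)) for i in range(n)]
    lam = sum(p * q for p, q in zip(phi, Aphi))   # Rayleigh quotient (80)

print(lam)
```

The normalization step plays the role noted in the text: the iterate may grow or shrink slowly when Λ is slightly wrong, but only its shape matters.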


Equation (80), which depends only upon the shape of φ, is stationary about the solution point. In other words, a reasonable estimate of the eigenvector produces an improved eigenvalue estimate from (80). The bracketed superscripts give the number of the successive eigenvalue estimate and are therefore used in a different context from that in (74). Using the new eigenvalue approximation, and returning to the SOR process with the most recent field estimate, a second and better estimate of φ is found, and so on until sufficient accuracy is obtained.

Whether or not convergence has been achieved may be gauged firstly by observing the percentage change in two or more successive eigenvalue estimates. If the change is considered satisfactory, perhaps less than one-tenth of a percent, then the displacement norm as a percentage of the vector norm should be inspected. (The norm of a column matrix is often defined as the square root of the sum of squares of all elements.) When this is within satisfactory limits, the process may be terminated. These requirements must be compatible, in that one cannot expect the displacement norm to be very small if the eigenvalue estimate is very inaccurate. What constitutes sufficient stationarity of the process is largely a matter of practical experience with the particular problem at hand, and no general rule can be given. This entire computing procedure, including ω optimization, is described in [34, pp. 375-376] and [38, pp. 114-129]. Moler [38] points out that no proof exists guaranteeing convergence with the Rayleigh quotient in the outer loop. However, experience indicates that with a reasonable Λ estimate to begin with, and with the other conditions satisfied, we can be fairly confident.

Generally, the higher the accuracy required, the smaller the mesh interval and the larger the number of equations to be solved. The number of eigenvalues of L_{ω,Λ} equals the order of the matrix, and a large number of eigenvalues means that they are closely packed together. It is therefore clear, from (76), that if the dominant and subdominant eigenvalues of L_{ω,Λ} (μ_1 and μ_2) are nearly equal, the process (74) will need a great number of iterations before the dominant eigenvector shape emerges. In fact, successive overrelaxation corrections could be so small that roundoff errors destroy the entire process and convergence never occurs. The answer is to start off with a crude mesh having, perhaps, one hundred nodes in the guide cross section. Solve that matrix eigenvalue problem, halve the mesh interval, interpolate (quadratically, preferably) for the newly defined node potentials, and then continue the process. The rationale behind this approach is that each iterative stage (other than the first) begins with a highly accurate field estimate, and so few iterations are required for the fine meshes. For arbitrary boundaries, programming for mesh halving and interpolation can be an onerous chore.

As one additional point, the effect of Neumann boundary conditions is to make B slightly nonsymmetric, and so convergence of the SOR process cannot be guaranteed. This occasionally causes iteration to behave erratically, and sometimes fail, for coarse meshes in which the asymmetry is most pronounced. Otherwise, the behaviour of such almost symmetric matrices is similar to that of symmetric ones. The most convenient way to guarantee symmetric matrices is to employ variational methods in deriving difference equations near boundaries. Forsythe and Wasow [34, pp. 182-184] give a good account of this approach.

When the first mode is obtained, the correct Λ gives det(B) = 0 when substituted into (79). If one then wanted to solve for a higher mode, a new and greater Λ estimate would be used. If this differs from the first by α, this is equivalent to subtracting α from all diagonal terms of B. Now, if ρ is any eigenvalue of B, then

    Bφ = ρφ.    (81)

Subtracting αφ from both sides,

    (B - αI)φ = (ρ - α)φ    (82)

and we see that all eigenvalues of the new matrix (B - αI) are shifted to the left by α units. Since B had a zero eigenvalue, at least one eigenvalue must now be negative. A symmetric matrix is positive definite if and only if all of its eigenvalues are positive [16, p. 105], and positive semidefinite if all are nonnegative. Therefore, (B - αI) is not positive semidefinite, and so the convergence theorem is violated. Consequently, the SOR scheme cannot be employed for modes higher than the first.

Davies and Muilwyk [32] published an interesting account of the SOR solution of several arbitrarily shaped hollow waveguides. Typical cutoff wavenumber accuracies were a fraction of one percent. This is an interesting result, as reasonable accuracy was obtained even for those geometries containing internal corners. Fields are often singular near such points. The finite difference approximation suffers because Taylor's expansion is invalid at a singularity. If errors due to reentrant corners are excessive, there are several approaches available. The reader is referred to Motz [39] and Whiting [45], in which the field about a singularity is expanded as a truncated series of circular harmonics. Duncan [33] gives results of a series of numerical experiments employing different finite difference operators, mesh intervals, etc.

An algorithm has recently been developed [27] which guarantees convergence by SOR iteration. The principle is to define a new matrix

    C = B^T B.    (83)

C is symmetric whether or not B is. Equation (78) becomes

    Cφ = 0    (84)

which is solved by SOR. Note that

    det(C) = det(B^T) det(B) = (det(B))²    (85)

and so (84) is satisfied by the same eigenvalues and eigenvectors as (71) and (78). SOR is guaranteed to be successful on (84) as C is positive semidefinite for any real B. Note that x^T x > 0 for any real, nonzero column matrix x. Substituting the transformation x = By gives y^T B^T B y = y^T C y ≥ 0, which shows that C is positive definite when B is nonsingular. If det(C) = 0, as happens at the solution point, then C is positive semidefinite, and so convergence is guaranteed.
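The key properties claimed for C can be spot-checked numerically (Python sketch; the 3x3 matrix B is an arbitrary nonsymmetric example whose rows sum to zero, so that (1, 1, 1)^T is a known null vector).

```python
n = 3
B = [[ 1.0, -2.0,  1.0],
     [ 0.5,  0.5, -1.0],
     [-1.0,  0.0,  1.0]]     # nonsymmetric; each row sums to zero

# C = B^T B, as in (83).
C = [[sum(B[k][i] * B[k][j] for k in range(n)) for j in range(n)]
     for i in range(n)]

def quad(M, x):
    # The quadratic form x^T M x.
    return sum(x[i] * M[i][j] * x[j] for i in range(n) for j in range(n))

# C is symmetric, x^T C x = |Bx|^2 >= 0, and C annihilates B's null vector.
for x in ([1.0, 0.0, 0.0], [1.0, -2.0, 3.0], [1.0, 1.0, 1.0]):
    Bx = [sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]
    print(quad(C, x), sum(v * v for v in Bx))
```

For the null vector (1, 1, 1)^T both printed quantities vanish, which is the det(C) = 0 situation at the solution point.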

This much is well known. The usefulness of the algorithm is that it describes a method of deriving the nonzero elements of one row at a time of C, as required by SOR, without recourse to B in its entirety. It is shown that the gth row of C requires only the gth node point potential and those of twelve other nodes in its immediate vicinity. The operations are expressed in the form of a thirteen-point finite difference operator. Thus, because storage requirements are minimal, and C is positive semidefinite, SOR can be employed for higher modes. This method requires considerably more logical decisions, while generating difference equations near boundaries, than does the usual five-point operator. The process can be speeded up considerably by generating (and storing) these exceptional difference equations only once for each mesh size. In this way, the computer simply selects the appropriate equation for each node as required. In the internal region, difference equations are generated very quickly, so that storing them would be wasteful. This is an entirely feasible procedure because nodes near boundaries increase in number only as h⁻¹ while the internal ones increase as h⁻². This approach would likely be profitable for the five-point operator as well. The method can be adapted to the deterministic problem (67). Normally, this would not be required, but if one attempts higher order derivative approximations at the boundary, for the Neumann or Cauchy problem, SOR often fails [37, pp. 50-53]. Because it guarantees positive definiteness, a suggested abbreviation is PDSOR.

Recently, Silvester [29] and Cermak demonstrated an approach whereby finite differences can be used in an open region. An arbitrary boundary is drawn about the field of interest. The interior region is solved in the usual way, and then the boundary values are altered iteratively until the effect of the boundary vanishes. Then, the solution in the enclosed space corresponds to a finite part of the infinite region.

Davies and Muilwyk [40] have employed finite differences in the solution of certain waveguide junctions and discontinuities. The method is applicable when the structure has a constant cross section along one coordinate. If this is so, the ports are closed by conducting walls and their finite difference technique for arbitrarily shaped waveguides [32] may be used. A limitation is that the ports must be sufficiently close together so that one seeks only the first mode in the newly defined waveguide. Otherwise, SOR will fail as described previously.

C. Parabolic and Hyperbolic Problems

Prime examples of these classes of differential equations are furnished by the wave equation

    ∇²φ = (1/c²) ∂²φ/∂t²    (86)

which is hyperbolic, and the source-free diffusion equation

    ∇²φ = (1/K) ∂φ/∂t    (87)

which is parabolic. Note that if φ is time harmonic, (86) becomes the Helmholtz equation. If φ is constant in time, both become Laplace's equation. The solutions of partial differential equations (86) and (87) are the transient responses of associated physical problems. The solution of (86) gives the space-time response of a scalar wave function. Equation (87) governs the transient diffusion of charge in a semiconductor, heat flow through a thermal conductor, or skin effect in an imperfect electrical conductor [78, pp. 235-236]. K is the diffusion constant. It is a function of temperature, mobility, specific heat, charge or thermal mass density, and electrical and thermal conductivity, depending upon the physical problem. For example, if a quantity of charge (or heat) is suddenly injected into a medium, the electric potential (or temperature) distribution is given by the solution of (87). The result φ is a function of space and time.

Such problems are more involved computationally than the elliptic problem is, due partly to the additional independent variable. The function and sufficient time derivatives at t = 0 must be specified in order to eliminate arbitrary constants produced by integration. It is then theoretically possible to determine φ for all t. Problems specified in this way are known as initial-value problems. To be really correct, the partial differential equation furnishes us with a boundary-value, initial-value problem. The finite difference approach is to discretize all variables and to solve a boundary value problem at each time step.

For simplicity, consider the one-dimensional diffusion equation

    ∂²φ/∂x² = (1/K) ∂φ/∂t.    (88)

To solve for φ(x, t), the initial value φ(x, 0) and boundary conditions, say, φ(0, t) = 0 and (∂φ/∂x)|_{x=l} = 0, are given. Discretization of (88) gives

    (φ_{i-1,j} - 2φ_{i,j} + φ_{i+1,j})/h² = (φ_{i,j+1} - φ_{i,j})/Kk    (89)

where h and k are the space and time intervals respectively. Rather than the central difference formula for second derivatives, forward or backward differences must be used at the boundary points, unless Dirichlet conditions prevail. The first of each subscript pair in (89) denotes the node number along x and the second denotes the time-step number. Therefore

    x = ih,  i = 0, 1, 2, ...
    t = jk,  j = 0, 1, 2, ... .    (90)

Rearranging (89),

    φ_{i,j+1} = φ_{i,j} + r(φ_{i-1,j} - 2φ_{i,j} + φ_{i+1,j})    (91)

with

    r = Kk/h².    (92)

In Fig. 8, the problem region, t being unbounded, is visualized as a two-dimensional mesh. This algorithm presents an explicit method of solution, as each group of


429 detail, could

easily take 1000 years! There appears to be an and that is through hybrid computation,

answer,

however,

the subject of Section VI. D. Integral Equations to posing a problem in terms of partial

As an alternative

differential equations, it may be cast into the form of an integral equation. This approach is particularly useful for certain known antenna problems where the Greens function is in advance. Its efficacy is questionable in arbitrarily solution of the as the for each source point, problem is as difficult

I
L & L

shaped closed regions because the numerical Greens function, solution of the original function may

i-l,j

i,j

i+l,j

i ( =h

itself. The integral analytically

approach too

is therefore Greens

useful in many free-space studies, and when the be found without

much trouble.

o
Fig. 8.

+(X,O)
Finite dfierence mesh for exdicit of an initial value problem.

Insofar

as this section is concerned,

it is sufficient

to point

out that the finite difference approach thin, arbitrary antenna, the integral

can be used. For a

solution

three adjacent pivots can be used to predict the next time step. In this way, the solution time as long as required unacceptable. There is a stability be shown that the explicit

one potential is advanced

at in

gives the z component between

of vector potential.

R is the distance point, i.e., (94) the ap-

or until error accumulation

becomes It can

the source and the observation R= [rrl.

criterion

that must be satisfied.

method

with one space coordinate

If the antenna is excited by a source at a given point,

is valid only when O< r< ~. This restriction, in conjunction with (92), indicates that the increased amount of computing required for improved accuracy is considerable. If h is halved then k must be quartered. The stability criterion is still more stringent for problems having two space dimensions, solution constraint. as the Crank-Nicolson method, before advancing requiring that O< r< ~. The explicit is also subject to a stability Another approach, known requires the solution of the wave equation
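The implicit alternative can be sketched in the same one-dimensional setting (again an illustrative sketch: the Crank-Nicolson system is assembled and solved directly, and a deliberately large time step, r = 4, is used to show that stability survives well beyond the explicit limit):

```python
import numpy as np

def crank_nicolson_step(phi, r):
    """One Crank-Nicolson step for 1-D diffusion: the second difference
    is averaged between time levels j and j+1, giving a simultaneous
    linear system for all interior nodes at once (an implicit method).
    The boundary values phi[0], phi[-1] are held fixed."""
    n = len(phi) - 2                       # number of interior nodes
    A = np.zeros((n, n))
    np.fill_diagonal(A, 1.0 + r)           # (1 + r) on the diagonal
    i = np.arange(n - 1)
    A[i, i + 1] = A[i + 1, i] = -r / 2.0   # -r/2 on the off-diagonals
    rhs = phi[1:-1] + 0.5 * r * (phi[:-2] - 2.0 * phi[1:-1] + phi[2:])
    rhs[0]  += 0.5 * r * phi[0]            # fixed-boundary contributions
    rhs[-1] += 0.5 * r * phi[-1]
    out = phi.copy()
    out[1:-1] = np.linalg.solve(A, rhs)
    return out

h, K = 0.05, 1.0
x = np.arange(0.0, 1.0 + h / 2, h)
k = 4.0 * h**2 / K                         # r = 4: unstable for the
phi = np.sin(np.pi * x)                    # explicit method, fine here
for _ in range(10):                        # ten large implicit steps
    phi = crank_nicolson_step(phi, r=K * k / h**2)
```

The cost per step is a linear solve over all nodes, but far fewer steps are needed, which is the trade-off described above.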

D. Integral Equations

As an alternative to posing a problem in terms of partial differential equations, it may be cast into the form of an integral equation. This approach is particularly useful for certain antenna problems where the Green's function is known in advance. Its efficacy is questionable in arbitrarily shaped closed regions because the numerical solution for the Green's function, for each source point, is as difficult as the solution of the original problem itself. The integral equation approach is therefore useful in many free-space studies, and when the Green's function may be found analytically without too much trouble.

Insofar as this section is concerned, it is sufficient to point out that the finite difference approach can be used. For a thin, arbitrary antenna, the integral equation (93) gives the z component of vector potential; R is the distance between the source and the observation point, i.e., R = |r − r′| (94). If the antenna is excited by a source at a given point, the approximate current distribution can be computed. This is accomplished by assuming I_z to be constant, but unknown, over each of the n subintervals. The integration in (93) is performed with the trapezoidal rule, thus producing an equation in n unknowns. Enforcing the required boundary conditions, n equations are produced, and so the unknowns are found by solving a set of simultaneous, linear equations. (Higher order integration schemes may be used if the current distribution along each subinterval is presumed to be described by a polynomial.) With the current distribution known, the potential may be calculated at any point in space.
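The discretization pattern just described, with the unknown current taken constant on each subinterval and the equation enforced at n match points, can be sketched generically. (The peaked kernel below is a hypothetical stand-in for the kernel of (93), which is not reproduced here, and the check uses a manufactured current; only the pattern is the point.)

```python
import numpy as np

def moment_method(kernel, rhs_vals, a, b, n):
    """Discretize  ∫_a^b kernel(x, x') I(x') dx' = rhs(x): the unknown
    current I is taken constant on each of n subintervals, the integral
    is replaced by one-point quadrature per subinterval, and the
    equation is enforced at the n midpoints, giving an n-by-n system."""
    w = (b - a) / n
    xm = a + w * (np.arange(n) + 0.5)           # match points
    A = kernel(xm[:, None], xm[None, :]) * w    # dense n x n matrix
    return xm, A, np.linalg.solve(A, rhs_vals(xm))

# Hypothetical thin-wire-like kernel, peaked near x = x' and
# regularized by a small assumed "wire radius" r0.
r0 = 0.05
K = lambda x, xp: 1.0 / np.sqrt((x - xp) ** 2 + r0 ** 2)

# Manufactured check: pick a current, generate the matching right-hand
# side with the same quadrature, and confirm the solve recovers it.
n = 80
w = 1.0 / n
xm = w * (np.arange(n) + 0.5)
I_true = np.sin(np.pi * xm)                     # assumed current shape
rhs = lambda x: (K(x[:, None], xm[None, :]) * w) @ I_true
_, A, I = moment_method(K, rhs, 0.0, 1.0, n)
```

Note that the matrix is dense: every subinterval's current contributes to the field at every match point, which is characteristic of integral-equation discretizations.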

A good, descriptive introduction is furnished by [53]. In [47], [49]-[51], and [54] the solution of integral equations, in radiation and scattering problems, through matrix methods is described. Fox [35] discusses practical and mathematical aspects of Fredholm (corresponding to the elliptic problem) and Volterra (initial-value problem) integral equations. The quasi-TEM microstrip problem is dealt with in [48] and [52]. Generally, the major difficulty is the derivation of the Green's function; the numerical problem is insignificant by comparison. Variational methods (Section V) offer another approach to the solution of integral equations.

V. VARIATIONAL METHODS

This subject, although not terribly new, is becoming increasingly important for several reasons. In the first place, it is relatively easy to formulate the solution of certain differential and integral equations in variational terms. Secondly, the method is very accurate and gives good results without making excessive demands upon computer store and time.


The solution is found by selecting a field which minimizes a certain integral. This integral is often proportional to the energy contained in the system, and so the method embodies a close correspondence with the real world.

The literature on variational methods is so scattered that there is good reason to collate and review the principles here. It is hoped that by reviewing these ideas, and relating them to microwave problems, the engineer will be encouraged to make immediate and more general use of them. Otherwise, the initiate could well spend many months accumulating the required information before being able to apply it.

The following theory is concerned almost exclusively with the solution of scalar potentials. Obviously then, static fields are the immediate beneficiaries. In addition, time-varying fields that may be derived from a single vector potential are also easily catered for. Although there are some indications of how to proceed, the author has not seen any general computer methods for fields with all six components of electric and magnetic field present. Such fields require both an electric and a magnetic vector potential function to generate them. Perhaps it would be just as well to solve all fields directly rather than through two potential functions.

A. Hilbert Function Spaces

The concept of a Hilbert function space is, in principle, very simple and most useful as well. It consists of a set of functions that obey certain rules. Typically, we will consider those functions belonging to this space as being those that are possible solutions of any particular field problem we wish to solve. For example, the field within a three-dimensional region bounded by a perfectly conducting surface, having some distribution of charge enclosed, is the solution of the Poisson equation

∇²φ = −ρ/ε  (95)

where ρ is some function of position, i.e., ρ(P). We know that the solution must be one of, or a combination of, functions of the form u(P) = sin (lπx/a) sin (mπy/b) sin (nπz/c) in a rectangular region, where a, b, and c are the dimensions of the region and l, m, and n are integers. If the conducting boundary of the box is held at zero potential, any one or any summation of such harmonic functions u will vanish at the walls and so will likewise give zero potential there. These components of a Fourier series are akin to the vector components of a real space: a summation of particular proportions of functions yields another function, just as a vector summation of components defines a point in space. Thus, a summation of harmonic components of the above form defines a particular function which, by analogy, we consider to be a point in an abstract function space. For this reason, such functions are often called coordinate functions. The number of dimensions may be finite or perhaps infinite. The Fourier series is an example of a particular function space consisting of orthogonal coordinate functions. In general, however, these functions need not be orthogonal.
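The coordinate-function picture can be made concrete in one dimension (an illustrative sketch: the interval, the sample potential, and the number of retained components are arbitrary choices):

```python
import numpy as np

a = 1.0
x = np.linspace(0.0, a, 2001)
dx = x[1] - x[0]
u = x * (a - x)                 # a sample potential, zero at both walls

# "Components" of u along the coordinate functions sin(m*pi*x/a),
# computed by simple numerical quadrature of the inner products.
coeffs = []
for m in range(1, 30):
    phi_m = np.sin(m * np.pi * x / a)
    coeffs.append(np.sum(u * phi_m) * dx / (np.sum(phi_m * phi_m) * dx))

# Summing the weighted coordinate functions reconstructs u: the list of
# coefficients fixes a "point" (a function) in the function space, just
# as vector components fix a point in ordinary space.
u_rec = sum(c * np.sin((m + 1) * np.pi * x / a)
            for m, c in enumerate(coeffs))
```

For this particular u, only the odd-order components survive, and a few dozen of them already reproduce the function to within a small fraction of a percent.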

The requirements we place upon functions belonging to the space are that they be twice differentiable (at least) and that they satisfy the homogeneous Dirichlet condition φ(s) = 0 at the conducting walls. Such functions are considered to belong to a linear set. By this is meant that if any two functions u and v belong to the set, then u + v and au (where a is a constant) likewise belong to it. In other words, the functions u + v and au are also twice differentiable and satisfy the relevant boundary conditions.

An inner product

(u, v) = ∫_Ω uv dΩ  (96)

is defined which, in a sense, gives the component of one function in the direction of the other (omitting, for the moment, the complex conjugate). This appears reasonable when we recall that it is precisely in this way that a Fourier component, really the component of a function u, is found. With this correspondence, the analogy between vector and function spaces becomes obvious. The reason for including the complex conjugate will be shown in a moment. The integration is performed over Ω, which may be a one-, two-, or three-dimensional physical space depending on the problem; in our example, the limits correspond to the walls. If u and v are vector functions of position, we alter (96) slightly to include a dot between them, thus signifying the integral of the scalar product u · v. In this work, however, we will consider them to be scalars, although the generalization of subsequent derivations should be fairly straightforward.

In addition to linearity, functions that are elements of a Hilbert space must satisfy the following axioms. For each pair of functions u and v belonging to the linear set, a number (u, v) is generated that obeys

(u, v) = (v, u)*;  (97)
(a₁u₁ + a₂u₂, v) = a₁(u₁, v) + a₂(u₂, v);  (98)
(u, u) > 0, u ≠ 0;  (99)
(u, u) = 0 if and only if u = 0.  (100)

Note that the definition of inner product (96) satisfies requirements (97)-(100) for all well-behaved functions. Note also that in a real Hilbert space (i.e., one spanned by real functions), (u, v) = (v, u). From axioms (97) and (98) it is easy to see that

(u, av) = a*(v, u)* = a*(u, v)  (101)

where a is a complex number. As a result of these definitions it is clear that an inner product whose factors are sums can be expanded according to the rules for multiplication of polynomials; the essential difference is that the numerical coefficient of the second factor must be replaced by its complex conjugate in carrying it outside of the brackets. It is clear now why the complex conjugate must be employed in axiom (97). Property (99) states that (u, u) > 0. Therefore, due to the linearity of the function space, new elements au must also satisfy (au, au) ≥ 0 where a is any complex number.

If property (97) were instead of the form (u, v) = (v, u), we would have (au, au) = a²(u, u) which, for arbitrary complex a, would be a complex quantity rather than a number greater than or equal to zero, perhaps even negative. Thus (99) would be violated.

In the following, we will assume implicitly that the norm of each function, defined by

‖u‖ = √(u, u)  (102)

is finite. The operation beneath the radical is akin to the inner product of a vector with itself, and so ‖u‖ is, by analogy, a measure of the length or magnitude of the function. Insofar as a field is concerned, it is its rms value.
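The axioms are easily checked numerically for a discrete analogue of (96), in which functions become arrays of samples and the integral becomes a weighted sum (an illustrative sketch with arbitrarily chosen sample functions):

```python
import numpy as np

dx = 1e-3
xs = np.arange(0.0, 1.0, dx)

def inner(u, v):
    """Discrete analogue of (96), with the complex conjugate required
    by axiom (97): (u, v) = sum u * conj(v) * dx."""
    return np.sum(u * np.conj(v)) * dx

# Two arbitrary complex "functions" sampled on the interval.
u = np.exp(1j * 2 * np.pi * xs)
v = xs * (1.0 - xs) + 1j * np.sin(np.pi * xs)
a1, a2 = 2.0 - 1.0j, 0.5 + 3.0j

axiom_97 = np.isclose(inner(u, v), np.conj(inner(v, u)))     # (u,v) = (v,u)*
axiom_98 = np.isclose(inner(a1 * u + a2 * v, v),
                      a1 * inner(u, v) + a2 * inner(v, v))   # linearity
axiom_99 = inner(u, u).real > 0 and abs(inner(u, u).imag) < 1e-12
rule_101 = np.isclose(inner(u, a1 * v), np.conj(a1) * inner(u, v))
```

Without the conjugate, (u, u) for a complex u would not be a real, nonnegative number, which is precisely the violation of (99) noted above.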

B. The Extremum Formulation

Among differential equations, there are two classes of problem we are interested in: the deterministic problem and the eigenvalue problem. Forsythe and Wasow [58, pp. 163-164] point out that variational approaches are computationally significant for elliptic problems, but not for hyperbolic problems, and they leave the application to parabolic problems open. The principle behind the method in solving elliptic problems rests on an approach which is an alternative to direct integration of the differential equation. The latter approach is often attempted by means of a Green's function, i.e., conversion of a boundary value problem to an integral equation. It frequently becomes overly complicated, if not altogether impossible to handle, because the Green's function itself is difficult to derive. On the other hand, a variational formulation presents an alternative choice: to find the function that minimizes the value of a certain integral. The function that produces this minimal value is the solution of the field problem. On the face of it, the alternative seems as unappealing as the original problem. However, due to certain procedures available for determining this minimizing function, the variational formulation has great computational advantages. In addition, convergence can be guaranteed under certain very broad conditions, and this is of considerable theoretical and numerical consequence. Illustrative of the generality of the method is the use of a general operator notation L; in practice, a great variety of operations may be denoted by this single letter.

1) The deterministic problem: The deterministic problem is written

Lu = f  (103)

where f = f(P) is a function of position. If

L = −∇²  (104)

and

f = ρ/ε  (105)

is a known charge distribution divided by the permittivity, we require to find u, the solution of the problem under appropriate boundary conditions. (The minus sign in (104) makes the operator positive definite, as will be shown.) Commonly, these boundary conditions take the forms (58)-(60). In the first instance we will concentrate on certain homogeneous boundary conditions, i.e., (58)-(60) with g = p = q = 0. If f = 0, (103) becomes Laplace's equation. If L = −(∇² + k²), then (103) represents one vector component of an inhomogeneous Helmholtz equation; u is then one component of vector potential and f is an impressed current times μ.
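With L = −∇² in one dimension and homogeneous Dirichlet conditions, the finite difference analogue of (103) uses the familiar second-difference matrix, whose symmetry and positive eigenvalues mirror the operator properties that make the variational formulation work (an illustrative sketch; mesh size and source are arbitrary):

```python
import numpy as np

n, h = 50, 1.0 / 51          # interior nodes on (0, 1), u(0) = u(1) = 0
# Second-difference matrix: the discrete analogue of L = -d^2/dx^2.
L = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

sym = np.allclose(L, L.T)                 # discrete self-adjointness
pos_def = np.linalg.eigvalsh(L).min() > 0 # discrete positive definiteness

# Deterministic problem (103) with a uniform source f = 1.
f = np.ones(n)
u = np.linalg.solve(L, f)
x = h * np.arange(1, n + 1)
# The exact solution of -u'' = 1, u(0) = u(1) = 0, is u = x(1 - x)/2,
# which the three-point scheme reproduces exactly at the nodes.
```

The minus sign in (104) is what makes the eigenvalues of this matrix positive; without it the discrete operator would be negative definite.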

To begin the solution, we must consider the set of all functions that satisfy the boundary conditions of the problem and which are sufficiently differentiable. Each such element u of the space belongs to the field of definition of the operator L; symbolically, u ∈ D_L. We then seek a solution of (103) from this function space.

We consider only self-adjoint operators. Self-adjointness of L means that (Lu, v) − (u, Lv), in which u, v ∈ D_L, is a function of u and v and their derivatives on s only. (s is the boundary of Ω and may be at infinity.) To have a self-adjoint problem, we must have

(Lu, v) = (u, Lv).  (106)

It will be seen that (106) is a requirement in the proof of the minimal functional theorem, and therefore of those problems treated by variational methods in the fashion described here. Whether or not an operator is self-adjoint depends strongly upon the associated boundary conditions. In addition, the self-adjoint operator will be required to be positive definite. The mathematical meaning of this is that

(Lu, u) > 0  (107)

whenever u is not identically zero; the inner product vanishes only when u = 0.

The significance of these terms is best illustrated by a simple example. Let L = −∇². Therefore

(Lu, v) = −∫_Ω v∇²u dΩ.  (108)

For convenience, take u and v to be real functions. Green's identity

∮_s v (∂u/∂n) ds = ∫_Ω ∇u · ∇v dΩ + ∫_Ω v∇²u dΩ  (109)

converts (108) to the form

(Lu, v) = ∫_Ω ∇u · ∇v dΩ − ∮_s v (∂u/∂n) ds  (110)

in which the last integration is performed over the boundary and n is the outward normal. The one-dimensional analogue of (110) is integration by parts. Similarly,

(Lv, u) = ∫_Ω ∇u · ∇v dΩ − ∮_s u (∂v/∂n) ds.  (111)

Under either the homogeneous Dirichlet or Neumann boundary conditions, the surface integrals in (110) and (111) vanish. Under the homogeneous Cauchy boundary condition, they do not vanish but become equal. At any rate, L is therefore self-adjoint under any one of these boundary conditions, or under any number of them holding over various sections of the boundary.


Property (106) is akin to matrix symmetry. Positive definiteness of L is readily observed by making u = v in (110) and substituting any of the previous homogeneous boundary conditions. An additional requirement, for the homogeneous Cauchy condition to satisfy (107), is that σ > 0.

It is a consequence of these properties that we can make the following statement: if the operator L is positive definite, then the equation Lu = f cannot have more than one solution. The proof is simple. Suppose the equation to have two solutions u₁ and u₂ such that Lu₁ = f and Lu₂ = f. Let w = u₁ − u₂. Since the operator is a linear one (a further requirement), we obtain Lw = 0 and hence (Lw, w) = 0. Since L is positive definite, we must then have w = 0 and so u₁ = u₂, thus proving that no more than one solution can exist. This is simply a general form of the usual proofs for uniqueness of solution of boundary-value problems involving elliptic partial differential equations.

For the solution of a partial differential equation, it was stated earlier that we will attempt to minimize a certain integral. The rule for forming this integral and, subsequently, ascribing a value to it is a particular example of a functional. Whereas a function produces a number as a result of giving values to one or more independent variables, a functional produces a number that depends on the entire form of one or more functions between prescribed limits. It is, in a sense, some measure of the function. A simple example is the inner product (u, v). The functional we are concerned with, for the solution of the deterministic problem, is

F = (Lu, u) − 2(u, f)  (112)

in which we assume that u and f are real functions. The more general form, for complex functions, is

F = (Lu, u) − (u, f) − (f, u).  (113)

It is seen that the last two terms of (113) give twice the real part of (u, f). Concentrating on (112), we will now show that if L is a positive definite operator, and if Lu = f has a solution, then (112) is minimized by the solution u₀. (The proof for (113) is not much more involved.) Any other function u ∈ D_L will give a larger value to F. The proof follows. Take the function u₀ to be the unique solution, i.e.,

Lu₀ = f.  (114)

Substitute (114) into (112) for f. Thus

F = (Lu, u) − 2(u, Lu₀).  (115)

Add and subtract (Lu₀, u₀) on the right-hand side of (115) and rearrange, noting that if L is self-adjoint then (Lu₀, u) = (Lu, u₀) in a real Hilbert space. Finally, we obtain

F = (L(u − u₀), u − u₀) − (Lu₀, u₀).  (116)

As L is positive definite, the last term on the right is a fixed quantity and the first is ≥ 0, vanishing only when u = u₀. F therefore assumes its least value if and only if u = u₀. To summarize, the minimal functional theorem requires that the operator be positive definite and self-adjoint under the stated boundary conditions. It is also required that the trial functions come from the field of definition of the operator L; i.e., they must be sufficiently differentiable and satisfy the boundary conditions. Otherwise a function which is not a solution might give a lesser value to F(u) and delude us into thinking that it is a better approximation to the solution.

From (116), note that the minimal value of the functional

F_min = −(Lu₀, u₀)  (117)

occurs for the exact solution u₀. Taking L = −∇², we get

F_min = ∫_Ω u₀∇²u₀ dΩ  (118)

where the integration is over a volume Ω. Now, with u₀(s) = 0, using Green's theorem,

F_min = −∫_Ω |∇u₀|² dΩ.  (119)

This integral is proportional to the energy stored in the region. The field arranges itself so as to minimize the contained energy!

The most common approach, used for finding the minimizing function, is the Rayleigh-Ritz method. As it relies upon locating a stationary point, we wish to ensure that, once such a point is located, it in fact corresponds to the solution. This is important because a vanishing derivative is a necessary, although not a sufficient, condition for a minimum. In other words, if a function u₀ ∈ D_L causes (112) to be stationary, is u₀ then the solution of (103)? It is easy to show that this is so. Let

F(ε) = F(u₀ + εη)  (120)

where ε is an arbitrary real number. Using (112), substituting the appropriate expressions, and taking L to be self-adjoint, we obtain

F(ε) = F(u₀) + 2ε(Lu₀ − f, η) + ε²(Lη, η)  (121)

after some algebraic manipulation. Differentiating,

dF/dε = 2(Lu₀ − f, η) + 2ε(Lη, η)  (122)

which must vanish at a stationary point. By hypothesis, this occurs when ε = 0; therefore

(Lu₀ − f, η) = 0.  (123)

If this is to hold for arbitrary η, then we must have that Lu₀ = f identically. In other words, the stationary point corresponds to the solution.

2) The eigenvalue problem: Functional (112) cannot a priori be used for the eigenvalue problem

Lu = λu  (124)

because the right-hand side of (124) is not a known function, as is f in (103).
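In the matrix analogue of these two problems (L a symmetric positive definite matrix, the inner product the ordinary dot product), both statements are easy to verify numerically: F(u) = (Lu, u) − 2(u, f) is smallest at the solution of Lu = f, and the quotient (Lu, u)/(u, u) never falls below the lowest eigenvalue (an illustrative sketch with randomly generated data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
L = M @ M.T + n * np.eye(n)        # symmetric positive definite "operator"
f = rng.standard_normal(n)

F = lambda u: u @ L @ u - 2.0 * u @ f      # matrix analogue of (112)
u0 = np.linalg.solve(L, f)                 # exact solution of Lu = f

# Every perturbed trial vector gives a larger value of F, as in (116).
trials = [u0 + 0.1 * rng.standard_normal(n) for _ in range(200)]
larger = all(F(u) > F(u0) for u in trials)

# Matrix analogue of the eigenvalue functional: the Rayleigh quotient
# is bounded below by the smallest eigenvalue.
lam_min = np.linalg.eigvalsh(L).min()
rayleigh = lambda u: (u @ L @ u) / (u @ u)
bounded = all(rayleigh(u) >= lam_min - 1e-9 for u in trials)
```

The identity F(u) − F(u₀) = (L(u − u₀), u − u₀) from (116) is exactly why every trial comes out larger: the difference is a positive definite quadratic form in the error.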

The relevant functional for the eigenvalue problem is, instead,

λ̃ = (Lu, u)/(u, u)  (125)

where u ∈ D_L. It is often called the Rayleigh quotient; equation (80) is one particular instance of it. If the lowest bound F_min is attained for some u₁ ≠ 0, then F_min = λ₁ is the lowest eigenvalue of the operator L and u₁ is the corresponding eigenfunction.

The proof of the preceding statements is quite direct. Let η be an arbitrary function from the field of definition of the operator L, i.e., η ∈ D_L, and let a be an arbitrary real number; therefore u₁ + aη ∈ D_L. We want to investigate the conditions under which F is stationary about u₁. Substitute

u = u₁ + aη  (126)

into (125), giving

F = (L(u₁ + aη), u₁ + aη)/(u₁ + aη, u₁ + aη).  (127)

As u₁ and η are fixed functions, F is a function of a only. Differentiate (127) with respect to a. Then, by hypothesis, the derivative vanishes when a = 0. We therefore get

(Lu₁, η)(u₁, u₁) − (Lu₁, u₁)(u₁, η) = 0.  (128)

With F = F_min and u = u₁, substitute (Lu₁, u₁) from (125) into (128). Rearranging,

(Lu₁ − F_min u₁, η) = 0.  (129)

Since η is arbitrary,

Lu₁ − F_min u₁ = 0  (130)

and so F_min is the lowest eigenvalue λ₁ and u₁ the corresponding eigenfunction.

Successive eigenvalues and eigenvectors are found by defining orthogonal function spaces, as before. It is fairly easy to show [62, pp. 220-221] that if the minimization of (125) is attempted with trial functions u orthogonal to u₁, in the sense

(u, u₁) = 0,  (131)

then F_min equals the second eigenvalue λ₂. Similarly, by defining a Hilbert space orthogonal to u₁ and u₂, the next eigenvalue and eigenvector result, and so on.

As the minimum of (125) corresponds to the lowest eigenvalue λ, we can rewrite the functional in the following form:

λ ≤ (Lu, u)/(u, u).  (132)

The numerator of (132) is positive, as L is a positive definite operator; the denominator is positive by (99). Therefore, rearranging the inequality,

(Lu, u) − λ(u, u) ≥ 0.  (133)

We know that (133) is an equality only for the correct eigenvalue and eigenfunction; consequently, the left-hand side is otherwise greater than zero. Therefore, as an alternative to minimizing (125), we can seek the solution of (124) by minimizing

F = (Lu, u) − λ(u, u).  (134)

C. Inhomogeneous Boundary Conditions

In solving Lu = f we have considered homogeneous boundary conditions almost exclusively. Such a restriction causes the more important problems (e.g., multiconductor potentials) to be excluded. This happens because the conditions under which self-adjointness can be proved no longer hold: substitute (58), say, into the surface integrals of (110) and (111) to verify this statement. In addition, the space is nonlinear as well.

Let us express the inhomogeneous boundary conditions in the form

B₁u|ₛ = b₁,  B₂u|ₛ = b₂, ⋯  (135)

where the Bᵢ are linear operators and the bᵢ are given functions of position on the boundary. Equations (58)-(60) are the most common examples. The number of boundary conditions required depends upon whether or not u is a vector function of position, and upon the order of the differential equation.

Assume that a function w exists which is sufficiently differentiable and satisfies boundary conditions (135); w is not necessarily the solution. As w satisfies the boundary conditions,

B₁w|ₛ = b₁,  B₂w|ₛ = b₂, ⋯.  (136)

Putting

v = u − w  (137)

we have

B₁v|ₛ = 0,  B₂v|ₛ = 0, ⋯  (138)

as the Bᵢ are linear operators. We have now achieved homogeneous boundary conditions. Instead of attempting a solution of

Lu = f  (139)

we examine

Lv = L(u − w)  (140)
   = f − Lw.

Let

f₁ = f − Lw  (141)

and so we can now attempt a solution of

Lv = f₁  (142)

under homogeneous boundary conditions (138). In any particular case, it still remains to prove self-adjointness for the operator L with functions satisfying (138). If this can be accomplished, then we may seek the function that minimizes

F = (Lv, v) − 2(v, f₁).  (143)

Substitute (137) and (141) into (143). After expansion,

F = (Lu, u) − 2(u, f) + (u, Lw) − (Lu, w) + 2(w, f) − (Lw, w).  (144)

f is fixed and w is a particular function selected (which we need not actually know). Therefore, the last two terms of (144) are constant and can play no part in minimizing the functional; they may be deleted. (We have assumed that u is selected from the set of functions satisfying the required boundary conditions; otherwise w would depend on u.) It now remains to examine (u, Lw) − (Lu, w) in the hope that u and w may be separated. If this attempt is successful, then an amended version of (144) may be written which excludes the unknown w.

Let us illustrate these principles with a practical example. Solve

∇²u = f  (145)

under the boundary condition

u(s) = g(s).  (146)

The symmetrical form of Green's theorem is

∫_Ω (w∇²u − u∇²w) dΩ = ∮_s (w ∂u/∂n − u ∂w/∂n) ds  (147)

where n is the external normal to s. Therefore, the third and fourth terms of (144) are

(u, Lw) − (Lu, w) = ∫_Ω (u∇²w − w∇²u) dΩ  (148)

= ∮_s (u ∂w/∂n − w ∂u/∂n) ds.  (149)

Since u(s) = w(s) = g(s), we have

(u, Lw) − (Lu, w) = ∮_s g (∂w/∂n − ∂u/∂n) ds.  (150)

Only the first term on the right-hand side of (150) is a function of u. In addition to neglecting the last two terms of (144), the last term of (150) may be disregarded as well. We are then left with

F = ∫_Ω (u∇²u − 2fu) dΩ − ∮_s g (∂u/∂n) ds.  (151)

Simplifying (151) with the identity (109),

F = −∫_Ω (|∇u|² + 2fu) dΩ + ∮_s u (∂u/∂n) ds − ∮_s g (∂u/∂n) ds.  (152)

Because of (146), the second and third integrals cancel, and we are left with

F = −∫_Ω (|∇u|² + 2fu) dΩ  (153)

which is made stationary by the solution of (145) under the inhomogeneous boundary condition (146). It happens, in this case, that (153) has the same form that homogeneous boundary conditions would produce. It also turns out, here, that the existence of a function w was an unnecessary assumption. Although we shall not demonstrate it here, functionals may be derived for the inhomogeneous Neumann and Cauchy problems as well. It is also not too difficult to formulate the solution of electrostatic problems involving media that are functions of position.

D. Natural Boundary Conditions

Thus far, we have required that trial functions substituted into (112) should each satisfy the stipulated boundary conditions. Except for the simplest of boundary shapes, such a constraint makes it practically impossible to select an appropriate set of trial functions. It turns out, however, that ∂u/∂n|ₛ = p(s) and ∂u/∂n|ₛ + σ(s)u(s) = q(s) are natural boundary conditions for the functional with L = −∇². The meaning of this is that we are now permitted to test any sufficiently differentiable functions with the certainty that the minimal value attained by (112) will be due to the solution and none other. On the other hand, we cannot entertain the Dirichlet boundary condition in this way.

It is easy to show this for the homogeneous Neumann problem. The form of the functional is

F = ∫_Ω (|∇u|² − 2fu) dΩ.  (154)

Substitute u = u₀ + αη, where η need not be in the field of definition of L. Differentiate with respect to α, make α = 0, and set the result to zero. Finally, employing Green's formula, we obtain

∮_s η (∂u₀/∂n) ds − ∫_Ω η (∇²u₀ + f) dΩ = 0.  (155)

Nowhere has η been required to satisfy boundary conditions. As η is arbitrary, (155) can hold only if −∇²u₀ = f with ∂u₀/∂n = 0 on the boundary. Thus the solution is found, with the appropriate Neumann boundary condition, without that condition ever having been imposed upon the trial functions.

E. Solution by the Rayleigh-Ritz Method

A number of functionals have been derived, the minimization of which produces solutions of differential or integral equations. The remaining question is how to locate the minimizing function. The most popular approach is the Rayleigh-Ritz method. Assume a finite sequence of n trial functions φ₁, φ₂, ⋯, φₙ and write

u = Σⱼ₌₁ⁿ aⱼφⱼ  (156)

where the aⱼ are arbitrary numerical coefficients. Substitute (156) into (112). Therefore

F = Σⱼ,ₖ₌₁ⁿ (Lφⱼ, φₖ) aⱼaₖ − 2 Σⱼ₌₁ⁿ (f, φⱼ) aⱼ.  (157)
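As a concrete sketch of (156) and (157), consider L = −d²/dx² on (0, 1) with homogeneous Dirichlet conditions and a constant source (all of these are illustrative choices, not taken from the paper). The trial functions φⱼ = xʲ(1 − x) satisfy the boundary conditions, and integration by parts reduces every matrix entry to exact integrals of powers of x:

```python
import numpy as np

n = 4                     # number of trial functions phi_j = x**j * (1 - x)
# For L = -d^2/dx^2 with phi_j(0) = phi_j(1) = 0, integration by parts
# gives (L phi_j, phi_i) = ∫ phi_i' phi_j' dx, evaluated in closed form.
def stiffness(i, j):
    # phi_j' = j*x**(j-1) - (j+1)*x**j
    return (i * j / (i + j - 1.0)
            - i * (j + 1.0) / (i + j)
            - (i + 1.0) * j / (i + j)
            + (i + 1.0) * (j + 1.0) / (i + j + 1.0))

def load(i):
    # (f, phi_i) with the constant source f = 1:
    # ∫ x**i * (1 - x) dx = 1/(i+1) - 1/(i+2)
    return 1.0 / (i + 1.0) - 1.0 / (i + 2.0)

A = np.array([[stiffness(i, j) for j in range(1, n + 1)]
              for i in range(1, n + 1)])
b = np.array([load(i) for i in range(1, n + 1)])
a = np.linalg.solve(A, b)          # the Rayleigh-Ritz coefficients

x = np.linspace(0.0, 1.0, 101)
u = sum(a[j - 1] * x**j * (1 - x) for j in range(1, n + 1))
# Exact solution of -u'' = 1, u(0) = u(1) = 0, is u = x(1 - x)/2, which
# lies in the trial space, so the method recovers it exactly.
```

The matrix A is small, dense, and symmetric, exactly the character noted below for Rayleigh-Ritz systems.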

We now wish to select the coefficients aⱼ so that (157) is a minimum, i.e.,

∂F/∂aᵢ = 0;  i = 1, 2, ⋯, n.  (158)

Rearranging (157) into powers of aᵢ, writing k instead of j in the second summation, and assuming that L is self-adjoint,

F = (Lφᵢ, φᵢ)aᵢ² + 2 Σₖ≠ᵢ (Lφᵢ, φₖ)aᵢaₖ − 2(f, φᵢ)aᵢ + terms not containing aᵢ.  (160)

Differentiating (160) with respect to aᵢ and setting the result equal to zero,

Σₖ₌₁ⁿ (Lφᵢ, φₖ)aₖ = (f, φᵢ),  i = 1, 2, ⋯, n.  (161)

Writing (161) in matrix form,

[A][a] = [b]  (162)

which may be solved for the coefficients a₁, a₂, ⋯, aₙ by the methods of Section II-A. By a very similar approach [62, pp. 226-229], [63, pp. 193-194], the Rayleigh-Ritz method applied to the eigenvalue problem gives

[A][a] = λ[B][a]  (163)

which is a matrix eigenvalue problem of form (21), where [A] has entries (Lφᵢ, φₖ) and [B] has entries (φᵢ, φₖ). If the trial functions are orthonormal eigenfunctions, terms occur only along the diagonal, giving the eigenvalues directly; similarly, (162) would then have diagonal terms only, giving the solution by the Fourier analysis method. Because the functions are all real and the operator is self-adjoint, we see from (97) and (106) that both the A and B matrices are symmetric. Also, B is positive definite, which permits it to be decomposed, as in (23), using real arithmetic only, thus resulting in an eigenvalue problem of form (22). It can be shown that (162) and (163) approach the solutions of Lu = f and Lu = λu as n approaches infinity. In practice, the matrices need not be very large for a high degree of accuracy to result. Notice also that these matrices are dense. These characteristics determine that direct methods should be employed in their solution.

F. Some Applications

The bibliography lists several useful references for the principles of variational methods; see, for example, [58], [61], and [63]. One of the most detailed treatments available is [62].

Bulley [57] solves for the TE modes of an arbitrarily shaped guide by a Rayleigh-Ritz approach. The trial functions, representing the axial magnetic field, are each of the form xᵐyⁿ, together constituting a two-dimensional polynomial over the waveguide cross section. Obviously, these trial functions cannot be chosen to satisfy all boundary conditions. However, it turns out that the homogeneous Neumann condition (which H_z must satisfy) is natural, and so no such constraint need be placed on the trial functions. On the other hand, the homogeneous Dirichlet condition (which is imposed upon E_z) is not satisfied naturally, and so Bulley's method is inapplicable in that case. If the guide boundary is fairly complicated, a single polynomial has difficulty in approximating the potential function everywhere. In such a case, Bulley subdivides the waveguide into two or more fairly regular regions and solves for the polynomial coefficients in each. In doing this, his approach is virtually that of the finite-element method; it differs from the usual finite-element method in the way that he defines polynomials that straddle subdivision boundaries while others vanish there.

Thomas [67] solves the TE problem by the use of Lagrange multipliers. In this way, he permits all trial functions while constraining the final result to approximate the homogeneous Dirichlet boundary condition. Although not essential to his method, he employs a polar coordinate system with polynomials in r and trigonometric θ dependence. Another possible approach, when boundary conditions are not natural, is to alter the functional in order to allow the trial functions to be unrestricted [64, pp. 1131-1133].

By a transformation, Yamashita and Mittra [68] reduce the microstrip problem to one dimension. They then solve for the fields and line capacitance of the quasi-TEM mode.

The finite element method is an approach whereby a region is divided into subintervals and appropriate trial functions are defined over each one of them. The most convenient shape is the triangle for two-dimensional problems, and the tetrahedron in three dimensions. These shapes appear to offer the greatest convenience in fitting them together, in approximating complicated boundary shapes, and in satisfying boundary conditions whether or not they are natural. Silvester [65], [66] has demonstrated the method in waveguide problems. An arbitrary waveguide is divided into triangular subintervals [65]. If the potential is considered to be a surface over the region, it is then approximated by an array of planar triangles, much like the facets on a diamond. Higher approximations are obtained by expressing the potential within each triangle by a polynomial [66]. Because of its high accuracy, the finite element approach appears useful for three-dimensional problems without requiring excessive computing [69].
436 appears useful for three-dimensional

problems without requiring excessive computing [69]. The method was originally expounded for civil engineering applications [70], [71], but has recently seen increasing application in microwaves (e.g., [55], [56], as well as the previous references). Other implementations of variational methods are described in [59] and [60].
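The triangular subdivision just described can be made concrete with a small sketch: first-order (planar-facet) triangles for Laplace's equation on a unit square, assembled and solved under a Dirichlet condition. This is a minimal modern rendering of the idea in Python with NumPy, not Silvester's program; the mesh size and boundary values are illustrative.

```python
import numpy as np

def tri_stiffness(p):
    """3x3 element stiffness matrix for a linear triangle with vertex
    coordinates p (3x2): K_ij = area * (grad N_i . grad N_j)."""
    (x1, y1), (x2, y2), (x3, y3) = p
    area = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    b = np.array([y2 - y3, y3 - y1, y1 - y2])
    c = np.array([x3 - x2, x1 - x3, x2 - x1])
    return (np.outer(b, b) + np.outer(c, c)) / (4.0 * area)

n = 4                                        # nodes per side (illustrative)
xs, ys = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
nodes = np.column_stack([xs.ravel(), ys.ravel()])
tris = []
for j in range(n - 1):                       # split each grid square in two
    for i in range(n - 1):
        k = j * n + i
        tris.append([k, k + 1, k + n])           # lower triangle
        tris.append([k + 1, k + n + 1, k + n])   # upper triangle

K = np.zeros((n * n, n * n))                 # assemble the global matrix
for t in tris:
    Ke = tri_stiffness(nodes[t])
    for a in range(3):
        for b_ in range(3):
            K[t[a], t[b_]] += Ke[a, b_]

# Dirichlet condition phi = x on the whole boundary; solve interior nodes
boundary = [k for k in range(n * n)
            if nodes[k, 0] in (0.0, 1.0) or nodes[k, 1] in (0.0, 1.0)]
interior = [k for k in range(n * n) if k not in boundary]
phi = nodes[:, 0].copy()                     # exact solution phi = x is linear
rhs = -K[np.ix_(interior, boundary)] @ phi[boundary]
phi[interior] = np.linalg.solve(K[np.ix_(interior, interior)], rhs)
# Linear potentials are reproduced exactly by first-order triangles:
assert np.allclose(phi, nodes[:, 0])
```

Because the trial functions are linear on each triangle, any linear potential is reproduced exactly; the higher-order polynomial elements of [66] reduce the error for curved solutions.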

Fig. 9. Single node, DSCT analog of the one-dimensional diffusion equation.

VI. HYBRID COMPUTATION

In Section IV-C two methods for the solution of initial value (transient) problems were introduced. Through the explicit approach, one is able to predict the potential of any node at the next time increment as a function of a few adjacent node potentials. The disadvantage is that a stability constraint demands very small time steps. On the other hand, the implicit method does not suffer from instability and permits larger time steps, but requires simultaneous solution of all node potentials for each step. As a result, both techniques are very time consuming and sometimes impossibly slow.

The hybrid computer offers a significantly different approach to the problem. It consists of two major parts: an analog and a digital computer. The analog is a model that obeys the same mathematical laws as the problem being considered. So the analog, which is presumably easier to handle, simulates the response of the system being studied. More precisely, the particular form of analog intended here is known as an electronic differential analyzer. By connecting electronic units (which perform integration, multiplication, etc.) together, it is possible to solve ordinary differential equations under appropriate initial conditions [74]. The solution is given as a continuous waveform. Whereas the digital computer will solve the problem in a number of discrete, time-consuming steps, the analog computer is faster; it gets the answer almost immediately. It is a natural ordinary differential equation solver.

This is fairly obvious for an ordinary differential equation, but how is a partial differential equation to be solved? Consider a one-dimensional diffusion equation

∂²φ/∂x² = ∂φ/∂t.    (165)

(Many of the following comments apply to the wave equation as well.) Discretize the spatial coordinate at the ith node. We have, using central differences,

dφᵢ(t)/dt = (φᵢ₋₁(t) − 2φᵢ(t) + φᵢ₊₁(t))/h².    (166)

At boundaries, forward or backward differences must be used. We have therefore reduced the partial differential equation to a system of ordinary differential equations, one at each node point. This is known as the DSCT (discrete-space-continuous-time) analog technique. Other formulations exist as well.

The time response of the potential at i, i.e., φᵢ(t), may be found by integrating (166). The other functions of time, φᵢ₋₁(t) and φᵢ₊₁(t), are forcing functions which are as yet unknown. Assume, for the moment, that we know them, and let us see how the analog computer can produce the time response φᵢ(t). Fig. 9 indicates, symbolically, the operation of an analog computer in solving (166). Circles indicate multipliers, the larger triangle represents an integrator, and the smaller one an inverter. Assume that the functions φᵢ₋₁(t) and φᵢ₊₁(t) are known, recorded perhaps, and played back into the integrator along with the initial value φᵢ(0). They are all multiplied in the ratios indicated, by 1/h², and fed into the integrator. The sum is then integrated, giving

φᵢ(t) = φᵢ(0) + (1/h²) ∫₀ᵗ (φᵢ₋₁(t) − 2φᵢ(t) + φᵢ₊₁(t)) dt.    (167)

In fact, the integrator produces the negative of (167), i.e., −φᵢ(t). This is fed back, thus completing the circuit and allowing the process to continue to any time t. An inverter follows the integrator to alter the sign if required.

We do not, of course, know the forcing functions φᵢ₋₁(t) and φᵢ₊₁(t). They are the responses of adjacent nodes, and they in turn depend upon forcing functions defined at other nodes. However, the boundary conditions φ₀(t) and φ₅(t) are known in advance, as well as the initial conditions φᵢ(0) for all nodes, i.e., φ(x, t) at t = 0. To solve the entire finite difference system at one time requires as many integrators as there are internal nodes. This is demonstrated in Fig. 10. (The factor 1/h² is incorporated into the integrators for simplicity.) What this scheme does, in fact, is to solve a system of four coupled ordinary differential equations in parallel.

Fig. 10. Simultaneous solution of the one-dimensional diffusion equation, by DSCT analog, with four internal nodes.

The analog has two obvious advantages: 1) rapid integration; and 2) parallel processing. In order to reduce discretization error, one increases the number of nodes. If the intention is to solve all node potentials simultaneously, then, due to the limited number of integrators available, the number of nodes must be small.
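The four-integrator arrangement of Fig. 10 has a direct digital counterpart: integrate the four coupled equations (166) with very small time steps, holding the boundary potentials fixed. A sketch in Python with NumPy (boundary values, initial condition, and step sizes are illustrative):

```python
import numpy as np

def dsct_rhs(phi_int, phi_left, phi_right, h):
    """Right-hand side of (166) for the internal nodes: the central-difference
    spatial operator of the one-dimensional diffusion equation."""
    full = np.concatenate(([phi_left], phi_int, [phi_right]))
    return (full[:-2] - 2.0 * full[1:-1] + full[2:]) / h**2

# Four internal nodes on 0 < x < 1: h = 1/5, boundaries phi_0 = 0, phi_5 = 1
h, phi_left, phi_right = 0.2, 0.0, 1.0
phi = np.zeros(4)                  # initial condition phi_i(0) = 0
dt, t_end = 1e-4, 2.0              # small steps mimic continuous integration
for _ in range(int(t_end / dt)):   # forward Euler as stand-in for the integrator
    phi = phi + dt * dsct_rhs(phi, phi_left, phi_right, h)

# Diffusion settles to the linear steady-state profile phi(x) = x
assert np.allclose(phi, [0.2, 0.4, 0.6, 0.8], atol=1e-6)
```

Forward Euler here stands in for the analog integrator; with dt/h² well below the explicit stability limit, the four node potentials settle to the linear steady-state profile, which is exactly the behavior the parallel analog integration produces continuously.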

One alternative strategy is to attempt an iterative procedure, reminiscent of the relaxation technique. This is done by making an initial guess at the transient response of each internal node, frequently just choosing constant time responses as shown by the uppermost curve in Fig. 11. Having made this guess at each node's time response, each φᵢ(t) is solved sequentially as described previously. Each φᵢ(t), upon being solved, is transmitted via an ADC (analog-digital converter) to the digital computer, where it is stored as discrete data in time. Attention is then focused upon node i + 1, with adjacent potential responses transmitted from store through DAC (digital-analog conversion) equipment. φᵢ₊₁(t) is then computed by the analog, with φᵢ(t) and the smoothed φᵢ₊₂(t) acting as forcing functions. The flow of data is indicated by arrowheads in Fig. 11. In sequence then, node transient responses are updated by continually scanning all nodes until convergence is deemed to be adequate. In effect, this procedure involves the solution of coupled ordinary differential equations, one for each node, by iteration.

Fig. 11. Iterative DSCT solution of the diffusion equation.

An additional refinement is to solve, not one node potential at a time, but groups of them. The number that can be catered for is, as pointed out before, limited by the amount of analog equipment available. This parallel processing of the entire problem is one of the advantages over the purely digital scheme. Furthermore, because of the continuous time response (i.e., infinitesimal time steps), we have an explicit method without the disadvantage of instability.

Parallel solution of blocks of nodes is the logical approach for two-dimensional initial value problems. Fig. 12(a) shows one possible format involving the solution at nine nodes. Forcing functions correspond to solutions at the twelve nodes excluded from the enclosed region. The set of nodes being considered would then scan the region, with alterations to the block format near boundaries. The making of such logical decisions, as well as storage, is the job of the digital computer.

Fig. 12. (a) Iterative, parallel processing scheme for the initial value problem. (b) Method of lines for the elliptic problem.

Hsu and Howe [75] have presented a most interesting feasibility study on the solution of the wave and diffusion equations by hybrid computation. The procedures mentioned above are more fully explained in their paper. Hsu and Howe did not actually have a hybrid computer available at the time of their experiments; the results were obtained through a digital simulation study. Hybrid computation of partial differential equations is still in its early stages, and little has been reported on actual computing times. However, some preliminary reports indicate speeds an order of magnitude greater than digital computing for such problems.

An intriguing possibility, for hybrid solution of the elliptic problem, is the method of lines. The mathematical theory is presented in [62, pp. 549-566]. The principle behind it is that discretization is performed along one coordinate only, in a two-dimensional problem, giving us a sequence of strips (Fig. 12(b)). In three dimensions, two coordinates are discretized, producing a number of prisms. We thus obtain coupled difference-differential equations. Each equation is then integrated under boundary conditions at each end of its strip and with forcing functions supplied by adjacent strips.

The analog computer technique does not as easily solve two-point boundary value problems as it does initial value problems. A common procedure is to attempt various initial values at one end of the strip until the far-end boundary condition is satisfied. This can often be very wasteful. A better idea is to solve each two-point boundary value problem as two initial value problems [7, pp. 239-240], [74, pp. 83-85]. This could be done for each strip in turn, or perhaps in groups. The entire region must then be scanned repeatedly, in this fashion, until convergence occurs. For other approaches to the solution of partial differential equations by hybrid computation, see [72], [73], [74], [76], and [77]. Finally, the hybrid system makes the Monte Carlo method [74, pp. 239-242, 360] a more attractive one.

It should be emphasized that hybrid solution of partial differential equations is still in its infancy. This section, in part an optimistic forecast, is intended to show that digital computing has no monopoly and certainly should not be considered a more respectable branch of computing. The analog machine integrates rapidly, and the digital machine has the ability to store information and make logical decisions. It therefore stands to reason that, working as a pair, each in its own domain, substantial advantages will be gained.

VII. CONCLUDING REMARKS

The intention of this paper is to familiarize the reader with the principles behind the numerical analysis of electromagnetic fields and to stress the importance of a clear understanding of the underlying mathematics. Improper numerical technique causes one to run up against machine limitations prematurely. In the immediate future, emphasis should perhaps be placed upon the development of finite difference and variational techniques for solving fields having all components of electric and magnetic field present everywhere. To date, with a few exceptions (e.g., [59, pp. 172-188]), most methods


appear to permit solution only of fields derivable from a single scalar potential. It is not difficult to formulate and solve field problems in a multidielectric region, but it may not correspond to the actual electromagnetic problem. This is indicated by the continuing discussion of quasi-TEM microstrip waves. Variational methods are being implemented now to a greater extent than ever before. Hybrid computation is likely to assume a significant, or perhaps commanding, role in field computation due to its greater speed and flexibility. Almost certainly, general purpose scientific computers will permit optional analog equipment to be added, with the analog elements connected and controlled from the program. Beyond this, it is virtually impossible to predict. It is futile wishing for computer technology to keep pace with all our problem solving requirements. We have outstripped machine capabilities already. The greatest hope is in the development of new numerical techniques. Due to his insight into the physical processes and his modeling ability, the engineer is ideally suited to this task.

ACKNOWLEDGMENT

The author wishes to express his appreciation to P. A. Macdonald and G. Oczkowski for discussions on numerical computing and programming techniques, and to Dr. J. W. Bandler for conversations on microwave problems. Views expressed by M. J. Beaubien and B. H. McDonald, on finite differences and hybrid computing respectively, were invaluable. Dr. W. G. Mathers, of Atomic Energy of Canada Ltd., made a number of helpful suggestions. Also to be thanked are librarians R. Thompson, Mrs. F. Ferguson, and Mrs. D. Globerman, for locating and checking many references, and Miss L. Milkowski for carefully typing the manuscript. Thanks are due W. J. Getsinger, Guest Editor of this Special Issue, for the invitation to write this review paper.

REFERENCES

General Numerical Analysis

[1] I. S. Berezin and N. P. Zhidkov, Computing Methods, vols. I and II. Oxford: Pergamon, 1965.
[2] D. K. Faddeev and V. N. Faddeeva, Computational Methods of Linear Algebra. San Francisco: Freeman, 1963.
[3] C.-E. Froberg, Introduction to Numerical Analysis. Reading, Mass.: Addison-Wesley, 1965.
[4] E. T. Goodwin, Modern Computing Methods. London: H. M. Stationery Office, 1961.
[5] R. W. Hamming, Numerical Methods for Scientists and Engineers. New York: McGraw-Hill, 1962.
[6] P. Henrici, Elements of Numerical Analysis. New York: Wiley, 1964.
[7] F. B. Hildebrand, Introduction to Numerical Analysis. New York: McGraw-Hill, 1956.
[8] E. Isaacson and H. B. Keller, Analysis of Numerical Methods. New York: Wiley, 1966.
[9] M. L. James, G. M. Smith, and J. C. Wolford, Applied Numerical Methods for Digital Computation with Fortran. Scranton, Pa.: International Textbook, 1967.
[10] L. G. Kelly, Handbook of Numerical Methods and Applications. Reading, Mass.: Addison-Wesley, 1967.
[11] C. Lanczos, Applied Analysis. Englewood Cliffs, N. J.: Prentice-Hall, 1956.
[12] M. G. Salvadori and M. L. Baron, Numerical Methods in Engineering. Englewood Cliffs, N. J.: Prentice-Hall, 1964.
[13] J. H. Wilkinson, Rounding Errors in Algebraic Processes. London: H. M. Stationery Office, 1963.

Matrices and Linear Equations

[14] B. A. Carré, "The determination of the optimum accelerating factor for successive over-relaxation," Computer J., vol. 4, pp. 73-78, 1961.
[15] G. E. Forsythe and C. B. Moler, Computer Solution of Linear Algebraic Systems. Englewood Cliffs, N. J.: Prentice-Hall, 1967.
[16] J. N. Franklin, Matrix Theory. Englewood Cliffs, N. J.: Prentice-Hall, 1968.
[17] L. A. Hageman and R. B. Kellogg, "Estimating optimum overrelaxation parameters," Math. Computation, vol. 22, pp. 60-68, January 1968.
[18] T. Lloyd and M. McCallion, "Bounds for the optimum overrelaxation factor for the S.O.R. solution of Laplace type equations over irregular regions," Computer J., vol. 11, pp. 329-331, November 1968.
[19] T. J. Randall, "A note on the estimation of the optimum successive overrelaxation parameter for Laplace's equation," Computer J., vol. 10, pp. 400-401, February 1968.
[20] J. K. Reid, "A method for finding the optimum successive overrelaxation parameter," Computer J., vol. 9, pp. 201-204, August 1966.
[21] J. F. Traub, Iterative Methods for the Solution of Equations. Englewood Cliffs, N. J.: Prentice-Hall, 1964.
[22] R. S. Varga, Matrix Iterative Analysis. Englewood Cliffs, N. J.: Prentice-Hall, 1962.
[23] D. C. Walden, "The Givens-Householder method for finding eigenvalues and eigenvectors of real symmetric matrices," M.I.T. Lincoln Lab., Cambridge, Mass., Tech. Note 1967-51, October 26, 1967.
[24] J. R. Westlake, A Handbook of Numerical Matrix Inversion and Solution of Linear Equations. New York: Wiley, 1968.
[25] J. H. Wilkinson, The Algebraic Eigenvalue Problem. New York: Oxford, 1965.
[26] J. H. Wilkinson, "Householder's method for the solution of the algebraic eigenproblem," Computer J., pp. 23-27, April 1960.

Finite Difference Solution of Partial Differential Equations

[27] M. J. Beaubien and A. Wexler, "An accurate finite-difference method for higher order waveguide modes," IEEE Trans. Microwave Theory and Techniques, vol. MTT-16, pp. 1007-1017, December 1968.
[28] C. T. Carson, "The numerical solution of TEM mode transmission lines with curved boundaries," IEEE Trans. Microwave Theory and Techniques (Correspondence), vol. MTT-15, pp. 269-270, April 1967.
[29] I. A. Cermak and P. Silvester, "Solution of 2-dimensional field problems by boundary relaxation," Proc. IEE (London), vol. 115, pp. 1341-1348, September 1968.
[30] F. Ceschino, J. Kuntzmann, and D. Boyanovich, Numerical Solution of Initial Value Problems. Englewood Cliffs, N. J.: Prentice-Hall, 1966.
[31] L. Collatz, The Numerical Treatment of Differential Equations. Berlin: Springer, 1960.
[32] J. B. Davies and C. A. Muilwyk, "Numerical solution of uniform hollow waveguides with boundaries of arbitrary shape," Proc. IEE (London), vol. 113, pp. 277-284, February 1966.
[33] J. W. Duncan, "The accuracy of finite-difference solutions of Laplace's equation," IEEE Trans. Microwave Theory and Techniques, vol. MTT-15, pp. 575-582, October 1967.
[34] G. E. Forsythe and W. R. Wasow, Finite-Difference Methods for Partial Differential Equations. New York: Wiley, 1960.
[35] L. Fox, Numerical Solution of Ordinary and Partial Differential Equations. Oxford: Pergamon, 1962.
[36] H. E. Green, "The numerical solution of some important transmission-line problems," IEEE Trans. Microwave Theory and Techniques, vol. MTT-13, pp. 676-692, September 1965.
[37] D. Greenspan, Introductory Numerical Analysis of Elliptic Boundary Value Problems. New York: Harper and Row, 1965.
[38] C. B. Moler, "Finite difference methods for the eigenvalues of Laplace's operator," Stanford University Computer Science Dept., Stanford, Calif., Rept. CS32, 1965.
[39] H. Motz, "The treatment of singularities of partial differential equations by relaxation methods," Quart. Appl. Math., vol. 4, pp. 371-377, 1946.
[40] C. A. Muilwyk and J. B. Davies, "The numerical solution of rectangular waveguide junctions and discontinuities of arbitrary cross section," IEEE Trans. Microwave Theory and Techniques, vol. MTT-15, pp. 450-455, August 1967.
[41] R. D. Richtmyer and K. W. Morton, Difference Methods for Initial-Value Problems. New York: Interscience, 1967.
[42] J. A. Seeger, "Solution of Laplace's equation in a multidielectric region," Proc. IEEE (Letters), vol. 56, pp. 1393-1394, August 1968.
[43] D. H. Sinnott, "Applications of the numerical solution to Laplace's equation in three dimensions," IEEE Trans. Microwave Theory and Techniques (Correspondence), vol. MTT-16, pp. 135-136, February 1968.
[44] G. D. Smith, Numerical Solutions of Partial Differential Equations. London: Oxford, 1965.
[45] K. B. Whiting, "A treatment for boundary singularities in finite difference solutions of Laplace's equation," IEEE Trans. Microwave Theory and Techniques (Correspondence), vol. MTT-16, pp. 889-891, October 1968.
[46] K. S. Yee, "Numerical solution of initial boundary value problems involving Maxwell's equations in isotropic media," IEEE Trans. Antennas and Propagation, vol. AP-14, pp. 302-307, May 1966.

Finite Difference Solution of Integral Equations

[47] M. G. Andreasen, "Scattering from cylinders with arbitrary surface impedance," Proc. IEEE, vol. 53, pp. 812-817, August 1965.
[48] T. G. Bryant and J. A. Weiss, "Parameters of microstrip transmission lines and of coupled pairs of microstrip lines," IEEE Trans. Microwave Theory and Techniques, vol. MTT-16, pp. 1021-1027, December 1968.
[49] R. F. Harrington [59, pp. 62-81] and [60].
[50] K. K. Mei, "On the integral equations of thin wire antennas," IEEE Trans. Antennas and Propagation, vol. AP-13, pp. 376-378, May 1965.
[51] J. H. Richmond, "Digital computer solutions of the rigorous equations for scattering problems," Proc. IEEE, vol. 53, pp. 796-804, August 1965.
[52] P. Silvester, "TEM wave properties of microstrip transmission lines," Proc. IEE (London), vol. 115, pp. 43-48, January 1968.
[53] R. L. Tanner and M. G. Andreasen, "Numerical solution of electromagnetic problems," IEEE Spectrum, pp. 53-61, September 1967.
[54] P. C. Waterman, "Matrix formulation of electromagnetic scattering," Proc. IEEE, vol. 53, pp. 805-812, August 1965.

Variational Methods

[55] S. Ahmed, "Finite element method for waveguide problems," Electron. Letts., vol. 4, pp. 387-389, September 6, 1968.
[56] P. F. Arlett, A. K. Bahrani, and O. C. Zienkiewicz, "Application of finite elements to the solution of Helmholtz's equation," Proc. IEE (London), vol. 115, pp. 1762-1766, December 1968.
[57] R. M. Bulley, "Computation of approximate polynomial solutions to the Helmholtz equation using the Rayleigh-Ritz method," Ph.D. dissertation, University of Sheffield, England, July 1968.
[58] G. E. Forsythe and W. R. Wasow [34, pp. 159-175].
[59] R. F. Harrington, Field Computation by Moment Methods. New York: Macmillan, 1968.
[60] R. F. Harrington, "Matrix methods for field problems," Proc. IEEE, vol. 55, pp. 136-149, February 1967.
[61] L. V. Kantorovich and V. I. Krylov, Approximate Methods of Higher Analysis. Groningen, Netherlands: Interscience, 1958.
[62] S. G. Mikhlin, Variational Methods in Mathematical Physics. New York: Macmillan, 1964.
[63] S. G. Mikhlin and K. L. Smolitskiy, Approximate Methods for Solution of Differential and Integral Equations. New York: Elsevier, 1967.
[64] P. M. Morse and H. Feshbach, Methods of Theoretical Physics, Part II. New York: McGraw-Hill, 1953, pp. 1106-1172.
[65] P. Silvester, "Finite-element solution of homogeneous waveguide problems," 1968 URSI Symp. on Electromagnetic Waves, paper 115 (to appear in Alta Frequenza).
[66] P. Silvester, "A general high-order finite-element waveguide analysis program," IEEE Trans. Microwave Theory and Techniques, vol. MTT-17, pp. 204-210, April 1969.
[67] D. T. Thomas, "Functional approximations for solving boundary value problems by computer," IEEE Trans. Microwave Theory and Techniques, this issue, pp. 447-454.
[68] E. Yamashita and R. Mittra, "Variational method for the analysis of microstrip lines," IEEE Trans. Microwave Theory and Techniques, vol. MTT-16, pp. 251-256, April 1968.
[69] O. C. Zienkiewicz, A. K. Bahrani, and P. L. Arlett, "Numerical solution of 3-dimensional field problems," Proc. IEE (London), vol. 115, pp. 367-369, February 1968.
[70] O. C. Zienkiewicz and Y. K. Cheung, The Finite Element Method in Structural and Continuum Mechanics. New York: McGraw-Hill, 1967.
[71] O. C. Zienkiewicz and Y. K. Cheung, "Finite elements in the solution of field problems," The Engineer, pp. 507-510, September 24, 1965.

Hybrid Computing

[72] J. R. Ashley, "Iterative integration of Laplace's equation within symmetric boundaries," Simulation, pp. 60-69, August 1967.
[73] J. R. Ashley and T. E. Bullock, "Hybrid computer integration of partial differential equations by use of an assumed sum separation of variables," 1968 Spring Joint Computer Conf., AFIPS Proc., vol. 33. Washington, D. C.: Thompson, pp. 585-591.
[74] G. A. Bekey and W. J. Karplus, Hybrid Computation. New York: Wiley, 1968.
[75] S. K. T. Hsu and R. M. Howe, "Preliminary investigation of a hybrid method for solving partial differential equations," 1968 Spring Joint Computer Conf., AFIPS Proc., vol. 33. Washington, D. C.: Thompson, pp. 601-609.
[76] R. Tomovic and W. J. Karplus, High Speed Analog Computers. New York: Wiley, 1962.
[77] R. Vichnevetsky, "A new stable computing method for the serial hybrid computer integration of partial differential equations," 1968 Spring Joint Computer Conf., AFIPS Proc., vol. 32. Washington, D. C.: Thompson, pp. 143-150.

Miscellaneous

[78] S. Ramo, J. R. Whinnery, and T. VanDuzer, Fields and Waves in Communication Electronics. New York: Wiley, 1965.