
Z-Transform and Its Application to Development of Scientific Simulation Algorithms

ZVONKO FAZARINC
620 Sand Hill Road, 417D, Palo Alto, California 94304

Received 23 December 2009; accepted 29 March 2010

ABSTRACT: Engineers and educators find in computer simulations a powerful substitute for the inability of calculus to produce closed-form solutions to modern problems. But the validity of quasi-closed-form solutions produced by computers remains suspect. While no magic test exists, pieces of algorithms can individually be examined by the Z-transform [E. I. Jury, Sampled-data control systems, John Wiley & Sons, 1958]. Two examples from engineering and three from education are presented. © 2010 Wiley Periodicals, Inc. Comput Appl Eng Educ 21: 75–88, 2013; View this article online at wileyonlinelibrary.com; DOI 10.1002/cae.20452

Keywords: Z-transform; science simulations; algorithm development; physics algorithms; math algorithms

Correspondence to Z. Fazarinc (z.fazarinc@comcast.net).
© 2010 Wiley Periodicals, Inc.

INTRODUCTION

While natural phenomena often exhibit high complexity, science, engineering, and education are able to penetrate many of their secrets by studying individual, separable behaviors. These are commonly described by differential equations, whose closed-form solutions are of high interest. The Laplace transform, which enables us to derive closed-form solutions, is limited to linear cases of orders four or lower. That is far removed from the practical needs, but there was no way to obtain a closed-form solution for phenomena of order five and higher until the advent of the computer.

Today we are able to attack problems of almost unlimited overall complexity and produce not just static but also dynamic closed-form solutions in the form of science simulations. These are gaining in credibility yet remain untested when predicting unobservable outcomes. There are no alternate methods available to scrutinize predictions of computer simulations, but we can test individual components of the overall algorithmic structure for their validity. To this end we need a tool that generates closed-form solutions of mini algorithmic components of the overall system. We find in the Z-transform such a tool. It is a discrete equivalent of continuous transforms such as Fourier and Laplace. Neither of them has a physical meaning unless we feel comfortable in the imaginary domain, but all of them are valuable mathematical tools. The Laplace transform's ability to convert differential equations into their algebraic equivalents is matched by the Z-transform's ability to do the same for difference equations.

One may legitimately question the focus on difference equations, which were seemingly abandoned a long time ago. But besides the reasons just quoted for dealing with them here, there are some additional ones that may justify the elevation of difference equations into a higher rank in computer science, mathematics, and physics. Computers are completely ignorant of integro-differential equations, while difference equations are native to them. Most algorithms used in scientific computation are in fact difference equations. Predicting what they will do when subjected to repeated evaluations is of high interest to designers of simulation algorithms. The Z-transform can give us an insight into their behavior.

A traditional reason for appreciation of difference equations is the fact that most problems of interest were originally described in terms of differences, resulting in one or more difference equations. The classical differential equations were then derived from them by letting the independent variables approach zero. This process made them less accessible to normal mortals and to computers alike. But leaving this controversial viewpoint aside, let us focus again on the fact that valuable insights can be gained into particular execution problems of troublesome algorithms from closed-form solutions. These can be obtained in two possible ways:

• Either we let the independent variables of the algorithm go to zero and then solve the resulting differential equation by means of the Laplace transform,


• Or we solve the difference equation directly by means of the Z-transform.

It is obvious that both methods are subject to low order and linear cases and are therefore applicable only to small separable components of more complex algorithms. We will demonstrate the Z-transform approach to create closed-form solutions on two examples in engineering physics and three in mathematical education. First we will show a simple derivation of the Z-transform. Its engineering principles and a formal solution of a generic second order difference equation appearing throughout the paper will be found in the Appendix.

This author has met many frustrated designers of physics algorithms, who like him have encountered similar problems in their practice. They have either blamed themselves for not understanding the calculus of discrete mathematics, blamed their computers for misinterpreting their logical inputs, or gave up in despair. We intend to demonstrate that the Z-transform lets us diagnose such misbehaved algorithms and then point to their possible cures. We will show that this approach delivers more reliable and relevant answers. More about the practice of Z-transformations is found in the Appendix.

Z-TRANSFORM DIRECTLY FROM LAPLACE TRANSFORM

The causal Laplace transform H(s) of a time function h(t) is defined as H(s) = \int_0^{+\infty} h(t)e^{-st}\,dt and its inverse as h(t) = \frac{1}{2\pi j}\int_{-\infty}^{+\infty} H(s)e^{st}\,ds, where t is time and s is a complex variable. We discretize time t into n integer increments to obtain an equivalent discrete forward transform H(s) = \sum_{n=0}^{+\infty} h(n)e^{-sn}. Using the substitution z = e^{s} we can write an equivalent function, but in terms of z:

H(z) = \sum_{n=0}^{+\infty} h(n)z^{-n}    (1)

This is the causal forward Z-transform. The Z-transform definitions are completed by the inverse. Substituting again the continuous time t with its discrete equivalent n in the inverse Laplace transform we obtain h(n) = \frac{1}{2\pi j}\int H(z)e^{sn}\,ds. The differential ds is found from the definition z = e^{s} as ds = dz/z. Substituting this we get

h(n) = \frac{1}{2\pi j}\oint H(z)z^{n-1}\,dz    (2)

HARMONIC MOTION ALGORITHM DERIVED WITH Z-TRANSFORM

The harmonic motion, which is at the root of many resonance phenomena in physics, is captured in the second order differential equation

\frac{d^2 h(t)}{dt^2} + \omega^2 h(t) = 0    (3)

In it h(t) is the function subject to harmonic behavior and \omega is the circular frequency. The Laplace transform solution of this equation is

h(t) = h(0)\cos\omega t + \frac{1}{\omega}\left.\frac{dh(t)}{dt}\right|_{t=0}\sin\omega t = h(0)\cos\omega t + g(0)\sin\omega t    (4)

An algorithm for computer generated harmonic motion, or for evaluation of trigonometric functions in general, is desirable and may be found directly from the harmonic motion equation. The first obvious idea that offers itself is to convert the differential harmonic equation into its discrete equivalent and then submit it to the computer for evaluation. But this task soon encounters a surprising number of unpredictable obstacles, which we will address with the Z-transform.

It is convenient to first split the second order Equation (3) into two first order equivalents as dh(t)/dt = \omega g(t) and dg(t)/dt = -\omega h(t). These are now easily converted into discrete forms

h(n+1) - h(n) = \omega g(n)        g(n+1) - g(n) = -\omega h(n)    (5)

When presented in a language using arrayed variables these equations are easily digested by the computer, as for example: for (n = 0; n < 30; n++) {h[n+1] = h[n] + ω*g[n]; g[n+1] = g[n] − ω*h[n];}. The choice of 30 increments is arbitrary, and if we think of them as nanoseconds the scaling is nine orders of magnitude. So if we wish the frequency to be, say, 50 MHz, we must choose \omega to be 2\pi\cdot 50\times 10^{6} scaled down by nine orders of magnitude, or \omega = 0.314159. The result of executing this algorithm with these numbers and with initial conditions h = 0 and g = 1 is plotted in Figure 1. There is not much doubt that we are dealing with an unstable algorithm.

Figure 1. Result of direct digitization of the harmonic motion equation; h(n) versus n (ns).
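The divergent run of Figure 1 is easy to reproduce. Below is a minimal, self-contained C sketch of the direct digitization (5) exactly as written above; the 30-step horizon, the value of omega, and the initial conditions are the ones quoted in the text, and the printed values of h grow without bound.

#include <stdio.h>

/* Direct digitization (5) of the harmonic motion equation.
   Reproduces the divergent behavior of Figure 1. */
int main(void)
{
    double omega = 0.314159;      /* 50 MHz expressed per nanosecond step */
    double h[31], g[31];
    h[0] = 0.0;                   /* initial conditions from the text */
    g[0] = 1.0;

    for (int n = 0; n < 30; n++) {
        h[n + 1] = h[n] + omega * g[n];
        g[n + 1] = g[n] - omega * h[n];
    }
    for (int n = 0; n <= 30; n++)
        printf("%2d  %8.4f\n", n, h[n]);   /* amplitude keeps growing */
    return 0;
}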

From Figure 1 we may gain an insight into the origin of the instability. To this end we must derive the numeric expression for h(n) as seen by the computer while executing our simple algorithm. The closed form solution for h(n) can be derived from (5). We take the backward difference [1] of the first equation, which amounts to h(n+2) - 2h(n+1) + h(n) = \omega[g(n+1) - g(n)]. Then we substitute the second equation for the bracketed term and get, after some simple algebraic manipulation, the second order difference equation for h(n) in the generic form

h(n+2) - 2h(n+1)A + h(n)B = 0    with A = 1 and B = 1 + \omega^2    (6)

The closed-form solution to (6) is found in Equation (A15) of the Appendix as

h(n) = (1 + \omega^2)^{n/2}\left[h(0)\cos(n\varphi) + \frac{h(1) - h(0)}{\omega}\sin(n\varphi)\right],    \varphi = \tan^{-1}\omega    (7)

A comparison with (4) demonstrates a reassuring similarity except for the divergence factor (1 + \omega^2)^{n/2}. This grows with the number of executions n and is obviously responsible for the instability of our algorithm (5).

We wish now to address this problem from a generic viewpoint of discrete calculus. The discrete domain is beset with problems originating in the basic question of what happens before something else. To demonstrate this let us write our algorithm (5) in a few possible ways:

h(n+1) = h(n) + \omega g(n)        g(n+1) = g(n) - \omega h(n)
h(n+1) = h(n) + \omega g(n)        g(n+1) = g(n) - \omega h(n+1)
h(n+1) = h(n) + \omega g(n+1)      g(n+1) = g(n) - \omega h(n)
h(n+1) = h(n) + \omega g(n+1)      g(n+1) = g(n) - \omega h(n+1)

There are many other possibilities available as combinations of the pairs shown above, and we propose a generalized form that includes them all:

h(n+1) = h(n) + \omega[a\,g(n) + (1-a)g(n+1)]    (8)
g(n+1) = g(n) - \omega[b\,h(n) + (1-b)h(n+1)]    (9)

Factors a and b may adopt any values desired. We will now combine these two equations into a second order difference equation containing only h(n) and then derive its closed-form solution. By inspecting it we should then be able to decide what values of the factors a and b we should adopt to match the true solution (4).

To accomplish this we take the first difference of (8), which produces the new expression

h(n+2) = 2h(n+1) - h(n) + \omega\{a[g(n+1) - g(n)] + (1-a)[g(n+2) - g(n+1)]\}

The bracketed expressions are directly available from (9), and when inserted into the above we get, again after some straightforward but messy algebra, the expression

h(n+2) - 2h(n+1)A + h(n)B = 0    (10)

A = \frac{1 + ab\omega^2 - (a+b)\omega^2/2}{1 + (1-a)(1-b)\omega^2},    B = \frac{1 + ab\omega^2}{1 + (1-a)(1-b)\omega^2}    (11)

We have encountered this equation earlier and found its solution in (A15) as

h(n) = h(0)B^{n/2}\cos n\varphi + B^{n/2}\,\frac{h(1) - h(0)A}{\sqrt{B - A^2}}\sin n\varphi,    \varphi = \tan^{-1}\!\left(\sqrt{B - A^2}/A\right)    (12)

Substituting A and B from (11) we find, after some lengthy but elementary algebra, the following answer for our function:

h(n) = \left[\frac{1 + ab\omega^2}{1 + (1-a)(1-b)\omega^2}\right]^{n/2}\left[h(0)\cos n\varphi + \frac{h(0)(a-b)\omega/2 + g(0)}{\sqrt{1 - (a-b)^2\omega^2/4}}\sin n\varphi\right]

\varphi = \tan^{-1}\frac{\omega\sqrt{1 - (a-b)^2(\omega/2)^2}}{1 - (a+b-2ab)\omega^2/2}    (13)

In deriving (13) we have evaluated the term h(1) from (8) and (9) by setting n equal to zero. Equation (13) again exhibits a divergence term that grows with the number of executions n. It is easy to see that this factor becomes unity when we set a + b = 1.

With b = 1 - a the divergence factor disappears and the solution becomes

h(n) = h(0)\cos n\varphi + \frac{h(0)(a - 1/2)\omega + g(0)}{\sqrt{1 - (a^2 - a + 1/4)\omega^2}}\sin n\varphi    (14)

If we now set a to 0.5, then the above becomes h(n) = h(0)\cos n\varphi + g(0)\sin n\varphi, which matches the continuous solution (4) quite well with the exception of the argument \varphi. This differs from \omega and is as such a source of frequency error with respect to it. In Figure 2 we have plotted the fractional frequency error (\varphi - \omega)/\omega for the divergent case a = b = 1 and for cases that produce stable, non-divergent solutions with a + b = 1.

Figure 2. Frequency error as a function of digitization interval for various sets of parameters.
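As a numerical companion to Figure 2, the following sketch evaluates the argument phi from (13) and prints the fractional frequency error (phi − omega)/omega for a few choices of a and b. The particular parameter sets are illustrative assumptions, not necessarily the exact ones plotted in the figure.

#include <stdio.h>
#include <math.h>

/* Fractional frequency error (phi - omega)/omega from Equation (13). */
static double freq_error(double a, double b, double omega)
{
    double num = omega * sqrt(1.0 - (a - b) * (a - b) * omega * omega / 4.0);
    double den = 1.0 - (a + b - 2.0 * a * b) * omega * omega / 2.0;
    double phi = atan(num / den);
    return (phi - omega) / omega;
}

int main(void)
{
    double omega = 0.314159;                /* same step size as before        */
    double sets[][2] = { {1.0, 1.0},        /* divergent case a = b = 1        */
                         {0.5, 0.5},        /* implicit scheme, a + b = 1      */
                         {1.0, 0.0} };      /* explicit sequencing, a + b = 1  */
    for (int k = 0; k < 3; k++)
        printf("a=%.1f b=%.1f  (phi-omega)/omega = %+.5f\n",
               sets[k][0], sets[k][1],
               freq_error(sets[k][0], sets[k][1], omega));
    return 0;
}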

With this much knowledge about the effect of our factors a and b we can now return to our algorithm (8) and (9) and make it stable by selecting the proper sequencing. The case a = b = 0.5 matches the continuous solution (4) and would be written as

h(n+1) = h(n) + \omega\,\frac{g(n) + g(n+1)}{2}        g(n+1) = g(n) - \omega\,\frac{h(n) + h(n+1)}{2}

This happens to be an implicit algorithm that computers do not accept kindly. There are numerous methods that deal with implicit algorithms, but we can circumvent them, yet still satisfy the stability condition, in a very straightforward way by choosing a = 1 and b = 0. The resulting algorithm is then

h(n+1) - h(n) = \omega g(n)        g(n+1) - g(n) = -\omega h(n+1)

This can now be cast into an appropriate computer language as we have done before: for (n = 0; n < 30; n++) {h[n+1] = h[n] + ω*g[n]; g[n+1] = g[n] − ω*h[n+1];}. Now the keen eye of a computer programmer would quickly note that the arrays can be eliminated for this specific case, so that the natural computer sequencing of statement evaluations can be exploited. We then simply write the program as for (n = 0; n < 30; n++) {h = h + ω*g; g = g − ω*h;}. This algorithm with the same parameters as before produces Figure 3, which meets our expectations for extended periods of time [2].

Figure 3. Results of correct digitization of the harmonic motion equation; h(n) versus n (ns).

The foregoing example illustrates how the Z-transform may be used to develop algorithms of prescribed behavior. The extraction of poles, required for inverse Z-transformation, is limited to orders of less than five, so the technique is also limited to low orders. But there is no known technique other than trial and error that can be employed for complex simulation algorithm development. In the next section we show another example of the Z-transform's power to accomplish this task.
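A complete version of the corrected program, with the same omega and initial conditions as before, could look like the sketch below; it simply wraps the two-statement loop quoted above and prints h so the bounded, Figure 3-like behavior can be checked over many more than 30 steps (the longer run is an arbitrary choice for illustration).

#include <stdio.h>

/* Corrected sequencing (a = 1, b = 0): g is updated with the freshly
   computed h, which removes the divergence factor of the naive scheme. */
int main(void)
{
    double omega = 0.314159;
    double h = 0.0, g = 1.0;       /* initial conditions as before */

    for (int n = 0; n < 200; n++) {
        h = h + omega * g;         /* uses g(n)   */
        g = g - omega * h;         /* uses h(n+1) */
        printf("%3d  %8.4f\n", n + 1, h);
    }
    return 0;                      /* amplitude remains bounded */
}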
ACCELERATED MOTION ALGORITHM DERIVED WITH Z-TRANSFORM

In this example we will demonstrate the use of convolution to derive algorithm parameters when the driving function is ill defined. We will apply it to one of the most frequently encountered tasks in simulations of dynamics, that is, the algorithm of accelerated motion based on Newton's laws.

A constant mass m, subjected to a force F(t), experiences an acceleration

D(t) = F(t)/m    (15)

This acceleration generates a velocity differential

dv(t) = D(t)\,dt    (16)

and a position differential

ds(t) = v(t)\,dt    (17)

A direct integration of (16) and (17) produces for the position of the object

s(t) = s(0) + v(0)t + \int_0^{t}dt\int_0^{t} D(t)\,dt    (18)

An algorithm to simulate accelerated motion on a computer can be elicited from Equations (16) and (17) by replacing the differentials with equivalent backward differences,

\Delta v(n+1) = v(n+1) - v(n) = D        \Delta s(n+1) = s(n+1) - s(n) = v    (19)

In (19) we have intentionally avoided specifications of temporal arguments for the acceleration D and velocity v because a number of choices are available. Here we could use the acceleration D(n+1) and velocity v(n+1), or their previously evaluated values D(n) and v(n), or we could also use some combinations thereof. Without knowing the outcomes produced by a given choice, we have no reason to prefer any one of them. Therefore we will again employ the parameter optimization procedure, similar to the one encountered in the previous section; we demonstrated it earlier before arriving at forms (8) and (9). To this end we choose as yet undefined fractions of the two primary choices, with the parameters a and b determining their contributions, as

v(n+1) = v(n) + aD(n) + (1-a)D(n+1)    (20)
s(n+1) = s(n) + bv(n) + (1-b)v(n+1)    (21)

This time we will use the expected answer (18) as the target. In order to make a comparison with (18) possible we need a closed form solution for s(n), which calls for elimination of the velocity terms from (21). We do this by taking the first difference of (21),

s(n+2) - s(n+1) = s(n+1) - s(n) + b[v(n+1) - v(n)] + (1-b)[v(n+2) - v(n+1)]

The bracketed expressions are easily found from (20) and their substitution produces

s(n+2) - 2s(n+1) + s(n) = abD(n) + (1-a)bD(n+1) + a(1-b)D(n+1) + (1-a)(1-b)D(n+2)    (22)

We recognize Equation (A10) in the Appendix as possessing the form of (22) when A = B = 1 and f(n) is equal to the right-hand side of (22). The specific solution of this equation is then as given by (A16), in which the asterisk * denotes the convolution in the n domain:

s(n) = s(0)(1-n) + ns(1) + [abD(n) + (a+b-2ab)D(n+1) + (1-a)(1-b)D(n+2)] * (n-1)

The s(1) term can be obtained directly from (20) and (21) for n = 0 as s(1) = s(0) + v(0) + a(1-b)D(0) + (1-a)(1-b)D(1). With this the inverse transform is

s(n) = s(0) + v(0)n + a(1-b)nD(0) + (1-a)(1-b)nD(1) + [abD(n) + (a+b-2ab)D(n+1) + (1-a)(1-b)D(n+2)] * (n-1)    (23)

We must now produce an explicit form of the discrete convolution by using Equation (A9) from the Appendix, which for our case takes on the form f(j) * (n-1) = \sum_{j=0}^{\infty} f(j)(n-j-1), where f(j) stands for the bracketed term in (23). We will drop the terms D(n+1) and D(n+2) from the above expressions because they are meaningless to our goal. It is namely quite obvious that the position s(n) at time n cannot possibly depend on accelerations at later times n+1 or n+2. This decision has nothing to do with mathematical practice but only with common sense. Then our inverse transform becomes

s(n) = s(0) + v(0)n + a(1-b)nD(0) + (1-a)(1-b)nD(1) + \sum_{j=0}^{n}[abD(j) + (a+b-2ab)D(j+1) + (1-a)(1-b)D(j+2)](n-1-j)    (24)

It is convenient to use the following easily proven identities:

\sum_{j=0}^{n} D(j+1)(n-1-j) = \sum_{j=0}^{n}[D(j)(n-j)] - nD(0) - D(n+1)

\sum_{j=0}^{n} D(j+2)(n-1-j) = \sum_{j=0}^{n}[D(j)(n+1-j)] - (n+1)D(0) - nD(1) - D(n+2)

Substitution of these into (24) yields

s(n) = s(0) + v(0)n + a(1-b)nD(0) + (1-a)(1-b)nD(1) + \sum_{j=0}^{n} D(j)[ab(n-1-j) + (a+b-2ab)(n-j) + (1-a)(1-b)(n+1-j)] - (a+b-2ab)[nD(0) + D(n+1)] - (1-a)(1-b)[(n+1)D(0) + nD(1) + D(n+2)]

We combine the sums and, after an elementary but lengthy algebraic manipulation, we end up with

s(n) = s(0) + v(0)n + (a-1)nD(0) - (1-a)(1-b)D(0) + \sum_{j=0}^{n} D(j)[ab(n-1-j) + (a+b-2ab)(n-j) + (1-a)(1-b)(n+1-j)]    (25)

It should be pointed out that we have arrived at this universal answer by means of the use of convolution. This can be a real time saver whenever we wish to examine the outcome of a variety of excitations. Without specifying the acceleration function D(j) one cannot further expand expression (25). But we do have the freedom to specify any acceleration within reason to find the resulting position s(n) of the object in question, as shown in the sketch below. So let us now do it for two kinds of accelerations and see what values the factors a and b have to adopt to force the algorithm (25) into alignment with Newton's laws contained in (18).
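Expression (25) is already in a form a computer can evaluate for any tabulated acceleration. The following sketch codes the sum for an arbitrary (here invented) acceleration history and for given factors a and b, purely to show how the universal answer is used; the specific numbers carry no physical meaning.

#include <stdio.h>

/* Direct evaluation of the universal position formula (25) for an
   arbitrary tabulated acceleration D(j) and given factors a and b. */
static double position(int n, const double D[], double a, double b,
                       double s0, double v0)
{
    double s = s0 + v0 * n + (a - 1.0) * n * D[0]
             - (1.0 - a) * (1.0 - b) * D[0];
    for (int j = 0; j <= n; j++)
        s += D[j] * (a * b * (n - 1.0 - j)
                   + (a + b - 2.0 * a * b) * (n - j)
                   + (1.0 - a) * (1.0 - b) * (n + 1.0 - j));
    return s;
}

int main(void)
{
    /* an arbitrary, made-up acceleration history */
    double D[12] = {1.0, 0.5, -0.2, 0.0, 0.3, 0.3, 0.0, -0.1, 0.2, 0.0, 0.1, 0.0};
    for (int n = 0; n <= 10; n++)
        printf("n=%2d  s=%8.3f\n", n, position(n, D, 1.0, 0.5, 0.0, 0.0));
    return 0;
}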
Step Acceleration

We define the step acceleration as D_s(n) = D_s u(n), where u(n) is the unit step function

u(n) = 0 if n < 0, 1 otherwise

It is illustrated in Figure 4 as a continuous function by dashed lines and as its discrete equivalent by dots appearing at unit time intervals. The acceleration is equal to D_s for all times equal to or greater than zero. If we think of it as having started at minus infinity, we have a constant acceleration case at hand. This type of acceleration can be employed to represent a multitude of cases, including zero and constant acceleration. We can think of an arbitrary force as a sequence of steps, one following another, expressed at discrete times. In the continuous domain such a sequence would consist of infinitesimally small steps occurring at infinitesimally small time intervals. In the discrete domain, finite steps occurring at unit time intervals would represent such an arbitrary acceleration function. Consequently, the results obtained for a single step will be representative of a whole array of acceleration functions.

Figure 4. Stepwise acceleration in the discrete domain.

When we perform the integrations indicated in (18) we obtain for the object's position the expression

s(n) = s(0) + v(0)n + D_s\,\frac{n^2}{2},    n \ge 0    (26)

To enable the comparison of results we must limit ourselves to discrete times t = n where the information is available to us.

We will now substitute the same step acceleration into (25). In conformance with its notation we must use the summation index j, which results in the following definition of the step acceleration for this case:

D(j) = 0 if j < 0, D_s otherwise

Consequently our expression (25) becomes

s(n) = s(0) + v(0)n + (a-1)nD_s - (1-a)(1-b)D_s + D_s\sum_{j=0}^{n}[ab(n-1-j) + (a+b-2ab)(n-j) + (1-a)(1-b)(n+1-j)]

Applying the following identities,

\sum_{j=0}^{n}(n-1-j) = \frac{n(n-1)}{2},    \sum_{j=0}^{n}(n-j) = \frac{n(n+1)}{2},    \sum_{j=0}^{n}(n+1-j) = \frac{n(n+1)}{2} + n + 1

we get

s(n) = s(0) + v(0)n - (1-a)(1-b)D_s + (a-1)nD_s + D_s\left[ab\,\frac{(n-1)n}{2} + (a+b-2ab)\frac{n(n+1)}{2} + (1-a)(1-b)\left(\frac{n(n+1)}{2} + n + 1\right)\right]

After another bout with algebra we end up with this simple answer:

s(n) = s(0) + v(0)n + D_s\left[\frac{n^2}{2} + n(0.5 - b)\right]    (27)

A comparison with (26) suggests the answer b = 0.5 to obtain a match. No restriction on a is placed by (27). Independence from a is understandable for our particular step function case, and extreme values can be used without ill effects. But fast and large fluctuations of acceleration might not be served well by large values of a. We will return to this question in the next section, where we address the impulsive acceleration. When we plug b = 0.5 into (20) and (21) we get the algorithm for computation of accelerated motion that matches Newton's laws for a step input:

v(n+1) = v(n) + aD(n) + (1-a)D(n+1)
s(n+1) = s(n) + \frac{v(n) + v(n+1)}{2}    (28)

In Figure 5 we see a plot produced by (28). In it we have used zero initial values for v and s, a unity value for D_s, and 0.5 for both a and b. The discrete solution (28) correctly predicts the motion at discrete points in time when a step input is applied. But it also correctly predicts the inertial motion in the absence of external forces.

Figure 5. Simulation of uniformly accelerated motion; s(n) versus n.
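A short sketch of algorithm (28) driven by a unit step acceleration shows the agreement with the continuous result (26) at the integer time points. Zero initial conditions and Ds = 1 mirror the Figure 5 setup; the value a = 0.5 is just one admissible choice, since (27) places no restriction on a.

#include <stdio.h>

/* Accelerated motion, algorithm (28), driven by a unit step acceleration.
   Compares the discrete positions with Newton's result s = n*n/2. */
int main(void)
{
    double a = 0.5;                 /* free parameter; b is fixed at 0.5   */
    double v = 0.0, s = 0.0;        /* zero initial velocity and position  */
    double Ds = 1.0;                /* step acceleration amplitude         */

    for (int n = 0; n < 10; n++) {
        double Dn  = Ds;            /* D(n)   = Ds for n >= 0 */
        double Dn1 = Ds;            /* D(n+1) = Ds            */
        double v_new = v + a * Dn + (1.0 - a) * Dn1;   /* Equation (20) */
        double s_new = s + 0.5 * (v + v_new);          /* Equation (28) */
        v = v_new;
        s = s_new;
        printf("n=%2d  s=%7.2f  Newton=%7.2f\n",
               n + 1, s, 0.5 * (n + 1.0) * (n + 1.0));
    }
    return 0;
}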

Impulsive Acceleration

The impulsive acceleration is illustrated in Figure 6 by dashed lines for the continuous domain and by discrete points for the discrete time domain. It appears at a given time instant and then vanishes. Such accelerations arise frequently in nature when particles are colliding. The simulation of such motion addresses a number of natural behaviors, thermodynamics being one of them. We denote the impulsive acceleration by D_i(n) = D_i\delta(n), where \delta(n) is the Kronecker delta function, defined as

\delta(n) = 1 if n = 0, 0 otherwise

Figure 6. Impulsive acceleration in the discrete domain.

When the corresponding acceleration D_i\delta(n) is substituted into (18) we get the following answer for the position of the object in discrete notation:

s(n) = s(0) + v(0)n + D_i n|_{n\ge 0}    (29)

Equation (29) will be used as the reference for establishing the parameters a and b in (25) for this case. If we apply the same impulsive acceleration to (25), the summation term has a value only when the index j is zero. Then (25) assumes the following form:

s(n) = s(0) + v(0)n + D_i a(n - b)    (30)

It is apparent that a match between (30) and (29) demands a = 1 and b = 0. Applying these values to (28), with zero initial conditions and D_i = 1, results in Figure 7. In it the continuous Newton solution is shown as a dashed line. While agreements are achieved at discrete points as demanded, a closer match would be desired. Unfortunately the discrete algorithm (28) does not provide any information at half integer points. A shift of the graph to the left by 0.5 would certainly produce a desirable pattern and would be achieved by choosing to impose an initial value of -0.5 in (30); the match still calls for a = 1 and b = 0. This produces the plot shown in Figure 8.

Figure 7. Discrete motion following a stepwise acceleration compared to theoretical expectation.
Figure 8. Improved match to expected motion by initial condition choice.

The object of this analysis was not to computerize the physics of accelerated motion but to demonstrate the technique for analysis of relevant algorithms. While interesting, any further discussion of this example would distract from the main thrust of this paper. We will therefore leave this section and take on a financial example of Z-transform application.

USE OF Z-TRANSFORM TO DEVELOP A FINANCIAL ALGORITHM

This time we want to derive an algorithm that will address an investment issue. Assume that we have at our disposal an amount of money m(0) and know that we need an annual amount of w(0) to live on at today's prices. The available investment offers an annual appreciation rate of 100 \times p\% but we predict an inflation of 100 \times i\% and want our income to follow it. How long will our money last? An alternate question we want the answer to is how much money is left after n years.

We define the variable m(n) to be the money amount after n years, to which the appreciation rate p and the inflation rate i apply, and w(n) the withdrawal in the nth year. With these quantities we can define the following relationships:

Money after the nth withdrawal:    m(n) - w(n)
Money after appreciation:    m(n+1) = [m(n) - w(n)]P    (31)
Inflated withdrawal:    w(n+1) = w(n)I    (32)

The shorthand used in the above is P = 1 + p, I = 1 + i. Take the first difference of (31), which is m(n+2) - m(n+1) = m(n+1)P - m(n)P - w(n+1)P + w(n)P and can be presented in the form

m(n+2) - m(n+1)(1 + P) + m(n)P + [w(n+1) - w(n)]P = 0    (33)

The square bracketed expression equals w(n)i according to (32). From (31) we also get w(n)iP = m(n)iP - m(n+1)i. With this we can write (33) as

m(n+2) - m(n+1)(I + P) + m(n)IP = 0    (34)

This is recognized as the second order difference equation encountered in the Appendix as (A10) when f(n) = 0, A = (I + P)/2, and B = IP. According to (A12) the poles of Equation (34) are real and easily evaluated as z_1 = I, z_2 = P. The solution is given in (A13) as

m(n) = m(0)\frac{I^{n+1} - P^{n+1}}{I - P} + \left[m(1) - 2m(0)\frac{I + P}{2}\right]\frac{I^{n} - P^{n}}{I - P}

m(1) is obtained from (31) by setting n to zero as m(0)P - w(0)P. With this, and after some elementary algebra whose only pleasure are the numerous cancellations of terms, the final solution to our problem is

m(n) = P^{n}\left[m(0) - w(0)\frac{1 - (I^{n}/P^{n})}{1 - (I/P)}\right]    (35)

Expression (35), divided by the inflated withdrawal from (32), is plotted in Figures 9–11 as a function of n. The three graphs tell us how the initial principal m(n), invested at the rate p, will behave with time if we start drawing one tenth, one twentieth, or one fortieth of the initial principal, respectively, while allowing the withdrawals to increase at the inflation rate i. The parameter is the ratio P/I as defined earlier.

Take an example from Figure 10 and choose the initial principal m(0) to be one million dollars. We start drawing 50,000 dollars the first year but then withdraw every year 3% more to allow for the inflation. If we want to preserve the initial capital forever, the required ratio P/I is 1.053 as seen from the graph; to achieve this we must have invested at 1.053 \times 1.03, or about 8.5%. If on the other hand we can get only 4% interest on our investment, the ratio of P to I is 1.04/1.03 = 1.01. The lower curve in the middle graph, which applies to this case, tells us that the money will run out in about 22 years.

Figures 9–11. Ratio of investment status m to inflated annual withdrawal w as a function of time in years n, for different ratios of investment appreciation P and inflation I, and for initial ratios m/w of 10, 20, and 40.
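The recursion (31)-(32) is trivial to run directly, and doing so confirms both the closed form (35) and the roughly 22-year horizon quoted for the 4% interest, 3% inflation, m/w = 20 example. The sketch below uses exactly those numbers; the 30-year cap on the loop is an arbitrary safety limit.

#include <stdio.h>
#include <math.h>

/* Investment recursion (31)-(32) versus the closed form (35). */
int main(void)
{
    double P = 1.04, I = 1.03;          /* 4% appreciation, 3% inflation  */
    double m = 1.0e6, w = 5.0e4;        /* one million, 50,000 withdrawal */
    double m0 = m, w0 = w;

    for (int n = 1; n <= 30; n++) {
        m = (m - w) * P;                /* Equation (31) */
        w = w * I;                      /* Equation (32) */
        /* Closed form (35): m(n) = P^n [ m0 - w0 (1 - (I/P)^n)/(1 - I/P) ] */
        double closed = pow(P, n) *
            (m0 - w0 * (1.0 - pow(I / P, n)) / (1.0 - I / P));
        printf("year %2d  recursion %12.0f  closed form %12.0f\n",
               n, m, closed);
        if (m <= 0.0) {
            printf("balance first goes negative in year %d\n", n);
            break;
        }
    }
    return 0;
}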

APPLICATION OF Z-TRANSFORM TO GAMBLING STATISTICS

Plain intuition would tell a gambler that the odds of winning are steadily improving with the number of attempts at the same game. But the facts are different, as we will prove by an analysis which will at the same time allow us to demonstrate the use of a two dimensional Z-transform. An example of such use in physics is found in Ref. [3]. Here we will attempt to find the probability of a number of successes s in n trials, p(s, n), at a game for which the probability of a success in a single trial is known. Let us designate this probability as being equal to p. Then

Probability of a success in a single trial:    p(1, 1) = p    (36)
Probability of a failure in a single trial:    p(0, 1) = 1 - p    (37)
Probability of s successes in n trials:    p(s, n), to be found    (38)

We can say with certainty that there can be no successes without trials, which means that we can set p(1, 0) = p(2, 0) = p(3, 0) = \cdots = p(s, 0) = 0. We can also say that failure is a certainty with no trials, that is, p(0, 0) = 1. The quantity p(0, n) is the probability of zero successes in n trials. Because the probability of one failure in one trial is (1 - p), we immediately conclude that p(0, n) must be (1 - p)^n.

Finally we can state that s successes in n trials can be achieved in two possible ways:

• Either we had s - 1 successes in the previous n - 1 trials, followed by a success in the nth trial. The probability of this happening is p(s - 1, n - 1) times the probability of success in one trial, which from (36) amounts to p(1, 1) = p.
• Or we had s successes in the previous n - 1 trials, followed by a failure in the nth trial. The probability of this happening is p(s, n - 1) times the probability of failure in a single trial, given by (37) as p(0, 1) = 1 - p.

The two probabilities add up to yield p(s, n) = p(s - 1, n - 1)p + p(s, n - 1)(1 - p). Advance both s and n by one increment and obtain

p(s+1, n+1) = p(s, n)\,p + p(s+1, n)(1 - p)    (39)

This is a partial difference equation in the variables s and n. It can be solved by a two-dimensional Z-transformation. To this end we will introduce the needed z-domain variables:

Z_{s\to u}[p(s, n)] = P(u, n)
Z_{n\to z}[P(u, n)] = Z_{n\to z}[Z_{s\to u}[p(s, n)]] = P(u, z)    (40)

The last expression is the doubly transformed p(s, n), denoted by P(u, z), where the respective z-domain variables are u for s and z for n. Let us now apply (A4) to do the first transformation of (39), from s into the u domain. This yields

uP(u, n+1) - up(0, n+1) = pP(u, n) + (1 - p)uP(u, n) - (1 - p)up(0, n)

Because the probability of a failure is (1 - p), p(0, n+1) = (1 - p)p(0, n). Consequently the term up(0, n+1) on the left cancels the term (1 - p)up(0, n) on the right. The simplified equation after the first transformation is now

uP(u, n+1) = pP(u, n) + (1 - p)uP(u, n)
is this to the power of n.3 the curve s = 1 has a ps d s zn value of 0. Note n! p(s. z) = p(s.0 01 56 10 11 15 16 20 21 25 26 30 asterisk subscripted to identify the domain. The best chances of dzs (n − s)! success. −1 1 d The case s = 0 is truncated at the top but it starts out at p(s. . n) = ps (1 − p)n−s (42) that we are faced with a single but (s + 1)-st order pole at z = 1 + p. n) = p(s. that is. 0) ∗s (1 − p)n−s s! (n − s)! long time. it is certain that there will be zero successes with zero trials. After substitution of the pole location for u we get Consequently the above summation contributes zero for all ps z values of j except when j = s. z) − zuP(u.3 at n = 1. n) = p(s − j. 0) ∗s Zu→s u − p/(z − 1 + p) one success in one trial is 30%. Invoking (A9) seems appropriate but the as one can read from our first graph. Substituting in the above and inserting the z value at the pole we get The most important lesson for gamblers to learn from these ps n! graphs is the fact that it does not pay to bet on the same game for a p(s.4 1 z uP(u.1 inverse transforms of these two factors must then be convolved in the s domain and we indicate the intended operation with an 0. 0)(zu/z − 1 + p) p = 0.3. z) is 0.2 With this aid we can write for the probability of s successes in n trials 0. From the same graph we confusion in many variables involved calls for some handholding.15 where the two functions to be convolved are 2 0.0 j! (n − s)! 0 10 20 30 40 50 j=0 n The summation extends over all values of j. 0) ∗s for the probability of s successes in n trials (z − 1 + p)s+1 The next step is now to invert the above with respect to z. When also combined with (A8) we get Expression (42) is plotted in Figure 12 for the case p = 0.5 s=0 f1 (s) ∗s f2 (s) = f1 (s − j)f2 (j) 0. 0) P(u. The 0. z) = p(s. p(s.85 in this = n(n − 1)(n − 2) · · · (n − s + 1)zn−s = zn−s case as compared to 0. The curve s = 0 drops off slower than in the d s zn n! previous case because the probability of a failure is 0. But we have Figure 13 Probability of s successes in n trials when the probability of concluded at the very outset that p(s. z). 0) ∗s us−1 p z−1+p u = z−1+p except zero. First 4 5 we do it with respect to u into the s domain. the probability of n failures last transformation because they are constants in the z − n domain. As one would expect for p = 0. We have a single pole at 0. 0) and f2 (s) = (1 − p)n−s 4 s! (n − s)! 5 0. z)(z − 1 + p)s+1 zn−1 ]z=1−p s! dzs The s = 0 case requires a failure at each of the n trials.7 in the preceding case. on the other hand are somewhat lower for obvious reasons.1  ∞ pj n! p(s. The curve s = 0 exhibits this exponential Consequently we can write behavior. n  zu/(z − 1 + p)  −1 Figure 12 Probability of s successes in n trials when the probability of P(s.15.3 P(u. 0) is zero for all values of s one success in one trial p is 15%. 1). z) + (1 − p)uP(u.4 1 j=0 p = 0. 0) ∗s | For other probabilities of a single success p we can construct a s! dzs z=1−p whole family of graphs. which is easily evaluated when (A7) is invoked. But the numerator contains a product of two functions of u. DEVELOPMENT OF SCIENTIFIC SIMULATION ALGORITHMS 83 zuP(u.3 ps n! 3 f1 (s) = p(s. zu | = p(s. n) = p(s. We show in Figure 13 only one additional The sth order derivative of Zn is easily found to be case for p = 0.2 u = p/(z − 1 + p) which makes the transformation quite simple. For example. Because The convolution and all functions of s are not affected by this the probability of a failure is (1 − p). n) = 1. 
We transform this now with respect to n, using the notation set down in (40) and the method suggested in (A4):

zuP(u, z) - zuP(u, 0) = pP(u, z) + (1 - p)uP(u, z)

Solving for the doubly transformed probability gives

P(u, z) = \frac{zuP(u, 0)}{zu - p - (1 - p)u} = \frac{z}{z - 1 + p}\,\frac{uP(u, 0)}{u - p/(z - 1 + p)}    (41)

We must now perform a double inverse transformation. First we do it with respect to u, into the s domain. We have a single pole at u = p/(z - 1 + p), which makes the transformation quite simple. But the numerator contains a product of two functions of u. Invoking (A9) seems appropriate, but the confusion of the many variables involved calls for some handholding. The inverse transforms of the two factors must be convolved in the s domain, and we indicate the intended operation with an asterisk subscripted to identify the domain. Transcribing (A9) for our case,

f_1(s) *_s f_2(s) = \sum_{j=0}^{\infty} f_1(s - j)f_2(j)

With this aid we can write for the probability of s successes in n trials

P(s, z) = p(s, 0) *_s Z^{-1}_{u\to s}\left[\frac{zu/(z - 1 + p)}{u - p/(z - 1 + p)}\right] = p(s, 0) *_s \frac{p^{s} z}{(z - 1 + p)^{s+1}}

The next step is now to invert the above with respect to z. The convolution and all functions of s are not affected by this last transformation because they are constants in the z-n domain. Note that we are faced with a single but (s+1)-st order pole at z = 1 - p. There is also just a single residuum, which is easily evaluated when (A7) is invoked:

p(s, n) = Z^{-1}_{z\to n}[P(s, z)] = \frac{1}{s!}\left[\frac{d^{s}}{dz^{s}}\,P(s, z)(z - 1 + p)^{s+1}z^{n-1}\right]_{z=1-p} = p(s, 0) *_s \frac{p^{s}}{s!}\left[\frac{d^{s} z^{n}}{dz^{s}}\right]_{z=1-p}

The sth order derivative of z^n is easily found to be

\frac{d^{s} z^{n}}{dz^{s}} = n(n-1)(n-2)\cdots(n-s+1)z^{n-s} = \frac{n!}{(n-s)!}z^{n-s}

Substituting in the above and inserting the z value at the pole we get

p(s, n) = p(s, 0) *_s \frac{p^{s} n!}{s!(n-s)!}(1 - p)^{n-s}

The only task left now is to convolve the two functions as indicated by the asterisk:

p(s, n) = \sum_{j=0}^{\infty} p(s - j, 0)\frac{p^{j} n!}{j!(n-j)!}(1 - p)^{n-j}

The summation extends over all values of j. But we have concluded at the very outset that p(s, 0) is zero for all values of s except zero, and p(0, 0) = 1. Consequently the above summation contributes zero for all values of j except when j = s. This leads instantly to the final answer for the probability of s successes in n trials:

p(s, n) = \frac{n!}{s!(n-s)!}\,p^{s}(1 - p)^{n-s}    (42)

Expression (42) is plotted in Figure 12 for the case p = 0.3. Recall that p is the probability of one success in one trial, or p(1, 1). For example, the probability of one success in one trial is 30%, as one can read from our first graph. As one would expect for p = 0.3, the curve s = 1 has a value of 0.3 at n = 1. The s = 0 case requires a failure at each of the n trials; because the probability of a failure is (1 - p), the probability of n failures is this to the power of n. The curve s = 0 exhibits this exponential behavior.

Figure 12. Probability of s successes in n trials when the probability of one success in one trial p is 30%; curves for s = 0 through 5.

For other probabilities of a single success p we can construct a whole family of graphs. We show in Figure 13 only one additional case, for p = 0.15. The curve s = 0 drops off more slowly than in the previous case because the probability of a failure is 0.85 in this case as compared to 0.7 in the preceding case. The best chances of success, on the other hand, are somewhat lower for obvious reasons.

Figure 13. Probability of s successes in n trials when the probability of one success in one trial p is 15%; curves for s = 0 through 5.

The most important lesson for gamblers to learn from these graphs is the fact that it does not pay to bet on the same game for a long time. For example, if one has not won once after some 8 trials when p = 0.3, the chances of winning are rapidly diminishing beyond that point. The same is true after only three trials if p = 0.15. From the same graph we find that if we have not had two successes in six trials the chances of that happening are fading. If this were a gambling lesson there would be much more to say about Equation (42), but the purpose of this exercise was to demonstrate the use of a two dimensional Z-transform.
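Result (42) is easy to cross-check numerically: the sketch below fills a table of p(s, n) directly from the partial difference equation (39) with the boundary conditions stated above, and compares a few entries against the closed form (42). The table sizes and the sample point n = 10 are arbitrary choices.

#include <stdio.h>
#include <math.h>

#define NMAX 30
#define SMAX 10

/* Binomial coefficient n!/(s!(n-s)!) computed iteratively. */
static double binom(int n, int s)
{
    double c = 1.0;
    for (int k = 1; k <= s; k++)
        c = c * (n - s + k) / k;
    return c;
}

int main(void)
{
    double p = 0.3;                          /* single-trial success probability */
    static double prob[SMAX + 1][NMAX + 1];  /* zero-initialized: p(s,0)=0, s>0  */

    prob[0][0] = 1.0;                        /* failure is certain with no trials */
    for (int n = 1; n <= NMAX; n++)
        prob[0][n] = prob[0][n - 1] * (1.0 - p);   /* zero successes in n trials */

    /* Partial difference equation (39):
       p(s+1, n+1) = p(s, n)*p + p(s+1, n)*(1 - p) */
    for (int n = 0; n < NMAX; n++)
        for (int s = 0; s < SMAX; s++)
            prob[s + 1][n + 1] = prob[s][n] * p + prob[s + 1][n] * (1.0 - p);

    for (int s = 0; s <= 3; s++) {
        int n = 10;
        double closed = binom(n, s) * pow(p, s) * pow(1.0 - p, n - s);
        printf("p(%d,%d): recursion %.6f  closed form %.6f\n",
               s, n, prob[s][n], closed);
    }
    return 0;
}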

CONNECTIVITY

If we have n nodes connected to each other, how many total links do we have? It is very easy to tell from the illustration in Figure 14 that a new node added to the previous n nodes creates exactly n new links. If we call the number of initial links l(n), then the number of links between n + 1 nodes will be l(n+1) = l(n) + n. This is a difference equation for l for which a closed form solution can be obtained with the Z-transform. Define Z[l(n)] = L(z); from Table 1 we get for the transform of l(n+1) the expression Z[l(n+1)] = zL(z) - zl(0) and for the transform of n the expression z/(z-1)^2. There are zero links for zero nodes, thus l(0) = 0. We can now transcribe our equation in the z-domain as zL(z) - zl(0) = L(z) + z/(z-1)^2. This leads to the transformed number of links

L(z) = \frac{z}{(z-1)^3}    (43)

Figure 14. Illustration of why the addition of one new node to a network of n nodes produces n new links.

Expression (43) consists of a third order pole at z = 1. From (A7) with k = 3 and z_1 = 1 we obtain for the inverse transform

l(n) = \frac{1}{2!}\left[\frac{d^2}{dz^2}L(z)(z-1)^3 z^{n-1}\right]_{z=1} = \frac{1}{2!}\left[\frac{d^2 z^{n}}{dz^2}\right]_{z=1} = \frac{1}{2}n(n-1)z^{n-2}\Big|_{z=1}

Substitute the value of z at the pole and obtain the familiar final answer l(n) = n(n-1)/2. This answer could have been derived in a less formal way by mathematical induction, but not so for our next example.

APPLICATION OF THE Z-TRANSFORM TO MATHEMATICS

What the factorial function does for integers, Euler's Gamma function does for all real numbers, including complex numbers. It is considered to be a rare function that does not originate in the solution of a differential equation. We will derive it by means of the Z-transform. The factorial function f(n) = n! = n \times (n-1)! has the basic definition f(n) = n \times f(n-1). Let us increment n in this expression by one to obtain the following difference equation

f(n+1) = n \times f(n) + f(n)    (44)

Define the transform of f(n) as Z[f(n)] = F(z). Using Table 1 we find the transform of n \times f(n) to be -z\,dF(z)/dz and the transform of f(n+1) to be zF(z) - zf(0). With these definitions we can write the transform of (44) as a differential equation,

\frac{dF(z)}{dz} + F(z) - \frac{1}{z}F(z) = 0

Separation of variables yields dF(z)/F(z) = (1/z - 1)dz, which after integration produces \ln F(z) = \ln z - z + c. This is further developed into

F(z) = e^{\ln z - z + c} = kze^{-z}    (45)

Constant k in (45) arises from the integration constant c. To bring the result back into the n domain we must perform an inverse transformation:

f(n) = \frac{1}{2\pi j}\oint F(z)z^{n-1}\,dz = \frac{k}{2\pi j}\oint e^{-z}z^{n}\,dz

A straightforward probing of the integrand reveals that its region of convergence is limited to positive values of z, meaning that the transform exists only when the lower limit of z is zero. For this case we can establish the constant k from the well-known fact that the factorial of 1 is 1. A per-partem integration of our modified expression yields

f(1) = 1 = \frac{k}{2\pi j}\int_0^{\infty} e^{-z}z\,dz = \frac{k}{2\pi j}

Consequently k = 2\pi j and we have

f(n) = \int_0^{\infty} z^{n} e^{-z}\,dz    (46)

Because we have set no restrictions on n, the resulting expression is applicable to any value of n. This is the ultimate factorial function, applicable to any real or complex number.
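The integral (46) can be probed numerically. The sketch below evaluates f(n) with a plain trapezoidal rule truncated at z = 60 (an assumed cutoff that is more than adequate for the small arguments tried here) and compares the result with the ordinary factorial for integer n and with the C library's tgamma for a fractional one.

#include <stdio.h>
#include <math.h>

/* Numerical probe of f(n) = integral from 0 to infinity of z^n e^(-z) dz,
   Equation (46), using a trapezoidal rule truncated at z = 60. */
static double f_integral(double n)
{
    double zmax = 60.0, dz = 1.0e-4, sum = 0.0;
    for (double z = 0.0; z < zmax; z += dz) {
        double a = pow(z, n) * exp(-z);
        double b = pow(z + dz, n) * exp(-(z + dz));
        sum += 0.5 * (a + b) * dz;
    }
    return sum;
}

int main(void)
{
    printf("f(3)   = %9.4f   expected 3! = 6\n",   f_integral(3.0));
    printf("f(4)   = %9.4f   expected 4! = 24\n",  f_integral(4.0));
    printf("f(3.5) = %9.4f   library tgamma(4.5) = %9.4f\n",
           f_integral(3.5), tgamma(4.5));
    return 0;
}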

Expression (46) with the argument (n - 1) is known as the Gamma function [4]:

\Gamma(n) = f(n-1) = \int_0^{\infty} z^{n-1}e^{-z}\,dz    (47)

It is plotted in Figure 15 for complex values of n.

Figure 15. Gamma function of a complex number n_r + jn_i.

An important recursive relationship, \Gamma(n+1) = n\Gamma(n), is found from a per-partem integration of (47). For all positive integers \Gamma(n+1) = n!, and as expected it produces the correct answer for, say, \Gamma(4) = f(3) = 6. This enables one to compute, for example, the factorial of 3.5 as \Gamma(4.5) = 3.5 \times 2.5 \times 1.5 \times 0.5 \times \Gamma(0.5) \approx 11.63, or the factorial of -2.5 as \Gamma(-1.5) = \Gamma(0.5)/[(-1.5)(-0.5)] \approx 2.363. A value of interest is \Gamma(0.5) = \sqrt{\pi}.

CONCLUSIONS

With the aid of examples from different domains we have shown that the Z-transform can serve as a tool for testing the validity of small segments of scientific simulation algorithms. Its predictive power can benefit the development of new algorithms and the troubleshooting of existing ones. Closed-form solutions of studied algorithms open a window into their long-term behavior and thereby provide clues for their optimization or repair. Furthermore, the Z-transform offers to students of physics, finances, statistics and mathematics educational insights into the origins of tools relevant to their fields.

APPENDIX

The Forward Z-Transformation

As derived in (1), the discrete functions h(n) are causal and as such vanish for negative arguments. Their Z-transform function is then

Z[h(n)] = H(z) = \sum_{n=0}^{\infty} h(n)z^{-n} = h(0) + \frac{h(1)}{z} + \frac{h(2)}{z^2} + \cdots + \frac{h(n)}{z^n} + \cdots    (A1)

Its causality demands that

h(n) = 0 if n < 0    (A2)

If h(n) happens to be a constant c, its transform is simply

C(z) = c\sum_{n=0}^{\infty} z^{-n} = c\,\frac{z}{z-1}    (A3)

Often the transform of the argument n itself is needed. Let us apply (A1) to it and then express the summand in terms of a derivative with respect to z:

Z[n] = \sum_{n=0}^{\infty} nz^{-n} = -z\frac{d}{dz}\sum_{n=0}^{\infty} z^{-n}

Using (A3) with the constant being one, we get for the transform of the timing argument n the expression

Z[n] = -z\frac{d}{dz}\,\frac{z}{z-1} = \frac{z}{(z-1)^2}

This and some other transforms are summarized in Table 1.

Table 1. Some Z-Transforms
Z[c] = cz/(z - 1)
Z[n] = z/(z - 1)^2
Z[n^2] = z(z + 1)/(z - 1)^3
Z[a^n] = z/(z - a)
Z[e^{-an}] = z/(z - e^{-a})
Z[1/n!] = e^{1/z}
Z[\sin\omega n] = z\sin\omega/(z^2 - 2z\cos\omega + 1)
Z[nh(n)] = -z\,dH(z)/dz
Z[h(n + 1)] = zH(z) - zh(0)
Z[h(n + 2)] = z^2H(z) - z^2h(0) - zh(1)

If a function h(n) has a Z-transform H(z), then h(n + k), where k is a positive integer, has the transform

Z[h(n+k)] = \sum_{n=0}^{\infty} h(n+k)z^{-n} = z^{k}\sum_{n=0}^{\infty} h(n+k)z^{-(n+k)} = z^{k}\sum_{n=k}^{\infty} h(n)z^{-n}

In the above we have brought the expression under the summation sign into conformance with (A1) by changing the lower limit. If we now add and subtract the term z^{k}\sum_{n=0}^{k-1} h(n)z^{-n}, we can bring the lower limit into alignment with (A1) also. Thus

Z[h(n+k)] = z^{k}\sum_{n=0}^{\infty} h(n)z^{-n} - z^{k}\sum_{n=0}^{k-1} h(n)z^{-n} = z^{k}H(z) - z^{k}\sum_{n=0}^{k-1} h(n)z^{-n}    (A4)

From (A4) we get Z[h(n+1)] = zH(z) - zh(0) and Z[h(n+2)] = z^2H(z) - z^2h(0) - zh(1). We can see that the initial conditions h(0), h(1), etc., are carried over into the z-domain whenever the argument of h(n) is incremented. An equivalent situation is found in the Laplace transform whenever the derivatives of the continuous function are present. It is apparent that a difference equation containing h(n), h(n+1), h(n+2), etc., will transform into the z-domain as a polynomial in powers of z, and solving for H(z) is reduced to algebra.

Let us apply the acquired knowledge to a trivial example. We have invested an amount of money at 5% annual interest and want to know how much money we will have after n years. In one year the money grows according to h(n+1) = h(n) \times 1.05. Using (A4) we can transform this into zH(z) - zh(0) = H(z) \times 1.05. Solving for H(z) we get H(z) = zh(0)/(z - 1.05). Here h(0) is obviously our initial investment in year 0. While H(z) does carry the answer to our original question, it happens to be in the wrong domain. We must transform it back into the n-domain, and how to do that we will learn in the next section.
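Any entry of Table 1 can be spot-checked by summing the defining series (A1) at a sample point inside the region of convergence. The little sketch below does this for Z[n] = z/(z − 1)^2 at an arbitrarily chosen z = 1.5.

#include <stdio.h>
#include <math.h>

/* Spot check of the Table 1 entry Z[n] = z/(z-1)^2 by direct summation
   of the defining series (A1) at a sample point z = 1.5. */
int main(void)
{
    double z = 1.5, sum = 0.0;
    for (int n = 0; n < 200; n++)          /* series converges for |z| > 1 */
        sum += n * pow(z, -n);
    printf("series sum   = %.6f\n", sum);
    printf("z/(z-1)^2    = %.6f\n", z / ((z - 1.0) * (z - 1.0)));
    return 0;
}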

The Inverse Z-Transformation

We will use the notation h(n) = Z^{-1}[H(z)] to indicate the inverse transformation. We have shown in (2) that the following integral accomplishes this procedure:

h(n) = \frac{1}{2\pi j}\oint H(z)z^{n-1}\,dz    (A5)

A powerful method for the evaluation of integrals of analytic functions is Cauchy's residue theorem [5]. If F(z) happens to be analytic, its integral is found as

\frac{1}{2\pi j}\oint F(z)\,dz = \sum \mathrm{Res}[F(z)]    (A6)

Res stands for the residuum at a pole of the function F(z). It is defined for the kth order pole at z = z_1 as

\mathrm{Res}_{z_1,k}[F(z)] = \frac{1}{(k-1)!}\left[\frac{d^{k-1}}{dz^{k-1}}F(z)(z - z_1)^{k}\right]_{z=z_1}    (A7)

Using (A6) we can then rewrite (A5) as

h(n) = \sum \mathrm{Res}[H(z)z^{n-1}]    (A8)

Expression (A8) in conjunction with (A7) provides a relatively simple inversion method for most practical problems in natural sciences, statistics and finance. Expression (A7) is by far simpler to use than it may appear at first glance. A trivial example of its application is our investment problem, which we left hanging in the previous section. In order to invert H(z) = zh(0)/(z - 1.05) we must find the residuum of R(z) = H(z)z^{n-1} at the pole located at z_1 = 1.05. It happens to be a single pole, thus k = 1. Using these values the residuum is

h(n) = \mathrm{Res}_{1.05,1}[H(z)z^{n-1}]_{z=1.05} = h(0)\,1.05^{n}

Here h(0) is obviously our initial investment in year 0. Some of the inversion results arising from (A7) and (A8) are collected in Table 2; more serious problems are addressed in the body of this paper.

Table 2. Some Inverse Z-Transforms
Z^{-1}[z/(z - z_1)] = z_1^{n}
Z^{-1}[1/((z - z_1)(z - z_2))] = (z_1^{n-1} - z_2^{n-1})/(z_1 - z_2)
Z^{-1}[1/(z - z_1)^2] = (n - 1)z_1^{n-2}
Z^{-1}[1/(z - z_1)^3] = (n - 1)(n - 2)z_1^{n-3}/2
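The inverse transform just obtained for the investment example, h(n) = h(0)·1.05^n, can be confirmed against the original recursion in a few lines; this is only a sanity check of the residue calculation, with an arbitrary initial amount of 1000.

#include <stdio.h>
#include <math.h>

/* Sanity check: the recursion h(n+1) = 1.05 h(n) against the inverse
   transform h(n) = h(0) * 1.05^n obtained from the residuum. */
int main(void)
{
    double h = 1000.0;                     /* arbitrary initial investment */
    for (int n = 0; n <= 10; n++) {
        printf("n=%2d  recursion %10.2f  closed form %10.2f\n",
               n, h, 1000.0 * pow(1.05, n));
        h = h * 1.05;
    }
    return 0;
}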

The Discrete Convolution

Mathematically, the discrete convolution of two functions of n in the n-domain is equivalent to a product of their transforms in the z-domain,

Z[h(n) * w(n)] = H(z)W(z)

Thus the inverse transform of a product of transforms must be their convolution. It is common to use an asterisk (*) to denote the convolution, so we can symbolize our statement as Z^{-1}[H(z) \times W(z)] = h(n) * w(n). But there is more to convolution than this fundamental definition. Apply the definition of the Z-transform from (1) to each of the two factors in the above expression and then substitute j = k - n:

h(n) * w(n) = Z^{-1}\left[\sum_{n=0}^{\infty} h(n)z^{-n}\sum_{j=0}^{\infty} w(j)z^{-j}\right] = Z^{-1}\left[\sum_{n=0}^{\infty} h(n)z^{-n}\sum_{k=n}^{\infty} w(k-n)z^{-k}z^{n}\right]

To the inverse transformation, functions of n are just constants, and after we cancel out the two n powers of z we can position the h(n) term outside the transform operator. Then we have

h(n) * w(n) = \sum_{n=0}^{\infty} h(n)\,Z^{-1}\left[\sum_{k=n}^{\infty} w(k-n)z^{-k}\right]

According to (A2), w(k - n) vanishes for n > k. Therefore we can freely set the lower limit of the second sum to k = 0. Then that sum becomes a plain Z-transform of w(k - n). Its inverse is then obviously the function w(k - n) itself. Consequently we end up with the answer

h(n) * w(n) = \sum_{n=0}^{\infty} h(n)w(k - n)    (A9)

In Table 3 we have summarized some useful inversion formulas employing the discrete convolution. This is a powerful technique that cannot be overestimated and that we will employ in our examples. It allows one to carry over into the z-domain functions of n which either cannot be transformed or are not fully defined yet. Later they can be brought back into the n-domain and incorporated into the inverted solution. Particular attention was paid to the power of convolution when faced with having to transform an as yet undefined function w(n).

Table 3. Discrete Convolution
Z^{-1}[Z[w(n)]\,z/(z - z_1)] = w(n) * z_1^{n}
Z^{-1}[Z[w(n)]/((z - z_1)(z - z_2))] = w(n) * (z_1^{n-1} - z_2^{n-1})/(z_1 - z_2)
Z^{-1}[Z[w(n)]/(z - z_1)^2] = w(n) * (n - 1)z_1^{n-2}
Z^{-1}[Z[w(n)]/(z - z_1)^3] = w(n) * (n - 1)(n - 2)z_1^{n-3}/2

Generic Solution of the Second Order Difference Equation

With these preparations we are now ready to apply the Z-transform techniques to solve an all-important difference equation that we will encounter in all our examples. The equation we wish to solve for h(n) is

h(n+2) - 2h(n+1)A + h(n)B = f(n)    (A10)

where f(n) is an arbitrary function of n. We employ (A4) to perform the transformation of h(n) into H(z), such that Z[h(n)] = H(z). Then

z^2H(z) - h(0)z^2 - h(1)z - 2H(z)zA + 2h(0)zA + H(z)B = Z[f(n)]

Solving for H(z) we get

H(z) = \frac{h(0)z^2 + h(1)z - 2h(0)zA}{z^2 - 2zA + B} + Z[f(n)]\times\frac{1}{z^2 - 2zA + B}    (A11)

The two poles of this equation are located at

z_{1,2} = A \pm \sqrt{A^2 - B}    (A12)

The inverse transformation of (A11) is critically dependent on the nature of the quantities A and B, and we will treat the three possible cases separately.

Case of Real Unequal Poles, A^2 > B. Equation (A8) explains the first two terms in the next expression,

h(n) = H(z)(z - z_1)z^{n-1}|_{z=z_1} + H(z)(z - z_2)z^{n-1}|_{z=z_2} + Z^{-1}[Z[f(n)]] * Z^{-1}\left[\frac{1}{(z - z_1)(z - z_2)}\right]

In the third term the asterisk denotes the convolution operator defined and derived in (A9). With the recipes of Tables 2 and 3 the result is

h(n) = h(0)\frac{z_1^{n+1} - z_2^{n+1}}{z_1 - z_2} + [h(1) - 2h(0)A]\frac{z_1^{n} - z_2^{n}}{z_1 - z_2} + f(n) * \frac{z_1^{n-1} - z_2^{n-1}}{z_1 - z_2}    (A13)

The pole locations z_1 and z_2 are given in (A12). No further simplifications, other than binomial expansion of the exponents, can be devised for this case.

Case of Identical Poles, A^2 = B. In this case z_1 = z_2 = A and expression (A11) becomes

H(z) = \frac{h(0)z^2 + h(1)z - 2h(0)zA}{(z - A)^2} + Z[f(n)]\times\frac{1}{(z - A)^2} = H_1(z) + H_2(z)

We will invert each term separately and start with H_1(z), which exhibits a double pole at A and according to (A6) and (A7) transforms into

Z^{-1}[H_1(z)] = \left[\frac{d}{dz}H_1(z)(z - A)^2 z^{n-1}\right]_{z=A} = h(0)A^{n}(1 - n) + nh(1)A^{n-1}

The second term H_2(z) retransforms into a convolution of f(n) with the inverse transform of 1/(z - A)^2, which is easily evaluated or read from Table 2 as (n - 1)A^{n-2}. Hence Z^{-1}[H_2(z)] = f(n) * (n - 1)A^{n-2}. The final solution of our Equation (A10) for this particular case of B = A^2 is the sum of the two partial answers:

h(n) = h(0)(1 - n)A^{n} + nh(1)A^{n-1} + f(n) * (n - 1)A^{n-2}    (A16)

Case of Imaginary Poles, A^2 < B. Expression (A13) applies to this case also, except for the value of the poles, which are now complex and located at

z_{1,2} = A \pm j\sqrt{B - A^2} = \sqrt{B}\,e^{\pm j\tan^{-1}(\sqrt{B - A^2}/A)} = \sqrt{B}\,e^{\pm j\varphi}    (A14)

Equation (A13) then relies on the following expressions, obtained after some elementary algebraic gymnastics starting from z_1^{n} = B^{n/2}e^{jn\varphi} and z_2^{n} = B^{n/2}e^{-jn\varphi}:

\frac{z_1^{n+1} - z_2^{n+1}}{z_1 - z_2} = \frac{A}{\sqrt{B - A^2}}B^{n/2}\sin n\varphi + B^{n/2}\cos n\varphi

\frac{z_1^{n} - z_2^{n}}{z_1 - z_2} = \frac{1}{\sqrt{B - A^2}}B^{n/2}\sin n\varphi

\frac{z_1^{n-1} - z_2^{n-1}}{z_1 - z_2} = \frac{A}{\sqrt{B - A^2}}B^{n/2-1}\sin n\varphi - B^{n/2-1}\cos n\varphi

When these quantities are substituted into expression (A13) one gets

h(n) = h(0)B^{n/2}\cos n\varphi + \frac{h(1) - h(0)A}{\sqrt{B - A^2}}B^{n/2}\sin n\varphi + f(n) * B^{n/2-1}\left[\frac{A}{\sqrt{B - A^2}}\sin n\varphi - \cos n\varphi\right]    (A15)

REFERENCES

[1] F. B. Hildebrand, Introduction to numerical analysis, Dover Publications, New York, 1956.
[2] Z. Fazarinc, Minicomputer as a circuit-design tool, Simulation Councils, Inc., February 1972, pp. 1-65–1-72.
[3] Z. Fazarinc, Discretization of partial differential equations for computer evaluation, Comput Appl Eng Educ 1 (1992/1993), 73–85.
[4] E. Artin, The gamma function, Holt, Rinehart and Winston, New York, 1964.
[5] P. Staric and E. Margan, Wideband amplifiers, Springer, 2006, pp. 47–56.
[6] E. I. Jury, Sampled-data control systems, John Wiley & Sons, 1958, p. 130.

BIOGRAPHY

Zvonko Fazarinc received his PhD in electrical engineering from Stanford University in 1965. He held a number of management positions at Hewlett-Packard Laboratories in Palo Alto and was a consulting professor of EE at Stanford University. His professional activities involved the fields of measurements, communications and computation with a focus on software engineering. His interest in discrete system analysis led him into exploration of the computer potential for science education. He has lectured at more than 100 universities throughout the world on this topic and has organized some of them into consortia. He continues his critical evaluation of the classical approaches to teaching in view of the modern computer technology.