# Z-Transform and Its Application to Development of Scientific Simulation Algorithms

ZVONKO FAZARINC

620 Sand Hill Road, 417D, Palo Alto, California 94304

Received 23 December 2009; accepted 29 March 2010

ABSTRACT: Engineers and educators find in computer simulations a powerful substitute for the inability of calculus to produce closed-form solutions to modern problems. But the validity of the quasi-closed-form solutions produced by computers remains suspect. While no magic test exists, pieces of algorithms can individually be examined by the Z-transform [E. I. Jury, Sampled-data control systems, John Wiley & Sons, 1958]. Two examples from engineering and three from education are presented. © 2010 Wiley Periodicals, Inc. Comput Appl Eng Educ

Keywords: Z-transform; science simulations; algorithm development; physics algorithms; math algorithms

## INTRODUCTION

While natural phenomena often exhibit high complexity, science, engineering, and education are able to penetrate many of their secrets by studying individual, separable behaviors. These are commonly described by differential equations, whose closed-form solutions are of high interest. The Laplace transform, which enables us to derive closed-form solutions, is limited to linear cases of orders four or lower. That is far removed from the practical needs, but there was no way to obtain a closed-form solution for phenomena of order five and higher until the advent of the computer.

Today we are able to attack problems of almost unlimited overall complexity and produce not just static but also dynamic closed-form solutions in the form of science simulations. These are gaining in credibility yet remain untested when predicting unobservable outcomes. There are no alternate methods available to scrutinize the predictions of computer simulations, but we can test individual components of the overall algorithmic structure for their validity. To this end we need a tool that generates closed-form solutions of mini algorithmic components of the overall system. We find in the Z-transform such a tool. It is a discrete equivalent of continuous transforms such as Fourier and Laplace. None of them has a physical meaning unless we feel comfortable in the imaginary domain, but all of them are valuable mathematical tools. The Laplace transform's ability to convert differential equations into their algebraic equivalents is matched by the Z-transform's ability to do the same for difference equations.

One may legitimately question the focus on difference equations, which were seemingly abandoned a long time ago. But besides the just quoted reasons for dealing with them, here are some additional ones that may justify the elevation of difference equations to a higher rank in computer science, mathematics, and physics. Computers are completely ignorant of integral-differential equations, while difference equations are native to them. Most algorithms used in scientific computation are in fact difference equations. Predicting what they will do when subjected to repeated evaluations is of high interest to designers of simulation algorithms. The Z-transform can give us an insight into their behavior.

A traditional reason for appreciation of difference equations is the fact that most problems of interest were originally described in terms of differences, resulting in one or more difference equations. The classical differential equations were then derived from them by letting the independent variables approach zero. This process made them less accessible to normal mortals and to computers alike. But leaving this controversial viewpoint aside, let us focus again on the fact that valuable insights can be gained into particular execution problems of troublesome algorithms from closed-form solutions. These can be obtained in two possible ways:

• Either we let the independent variables of the algorithm go to zero and then solve the resulting differential equation by means of the Laplace transform,

Correspondence to Z. Fazarinc (z.fazarinc@comcast.net).
© 2010 Wiley Periodicals, Inc.

• Or we solve the difference equation directly by means of the Z-transform.

It is obvious that both methods are restricted to low order and linear cases and are therefore applicable only to small separable components of more complex algorithms. First we will show a simple derivation of the Z-transform. Its engineering principles, and a formal solution of a generic second order difference equation appearing throughout the paper, will be found in the Appendix, as will more about the practice of Z-transformations.

This author has met many frustrated designers of physics algorithms who, like him, have encountered similar problems in their practice. They have either blamed themselves for not understanding the calculus of discrete mathematics, blamed their computers for misinterpreting their logical inputs, or gave up in despair. We intend to demonstrate that the Z-transform lets us diagnose such misbehaved algorithms and then point to their possible cures. We will demonstrate the Z-transform approach to creating closed-form solutions on two examples from engineering physics and three from mathematical education.

## Z-TRANSFORM DIRECTLY FROM LAPLACE TRANSFORM

The causal Laplace transform H(s) of a time function h(t) is defined as H(s) = ∫₀^∞ h(t)e^(−st) dt and its inverse as h(t) = (1/2πj) ∫ H(s)e^(st) ds, where t is time and s is a complex variable. We discretize time t into n integer increments to obtain an equivalent discrete forward transform H(s) = Σ(n=0 to ∞) h(n)e^(−sn). Using the substitution z = e^s we can write an equivalent function in terms of z. This is the causal forward Z-transform:

    H(z) = Σ(n=0 to ∞) h(n)z^(−n)    (1)

Let us now turn our attention to the inverse. Replacing again the continuous time t with its discrete equivalent n in the inverse Laplace transform we obtain h(n) = (1/2πj) ∫ H(z)e^(sn) ds. The differential ds is found from the definition z = e^s as ds = dz/z. Substituting this we get

    h(n) = (1/2πj) ∮ H(z)z^(n−1) dz    (2)

## HARMONIC MOTION ALGORITHM DERIVED WITH Z-TRANSFORM

The harmonic motion, which is at the root of many phenomena in physics, is captured in the second order differential equation

    d²h(t)/dt² + ω²h(t) = 0    (3)

In it h(t) is the function subject to harmonic behavior and ω is the circular frequency. The Laplace transform solution of this equation is

    h(t) = h(0)cos ωt + (1/ω)[dh(t)/dt]|t=0 sin ωt = h(0)cos ωt + g(0)sin ωt    (4)

An algorithm for computer generated harmonic motion, or for the evaluation of trigonometric functions in general, is desirable and may be found directly from the harmonic motion equation. The first obvious idea that offers itself is to convert the differential harmonic equation into its discrete equivalent and then submit it to the computer for evaluation. But this task soon encounters a surprising number of unpredictable obstacles, which we will address with the Z-transform.

It is convenient to first split the second order Equation (3) into two first order equivalents as dh(t)/dt = ωg(t) and dg(t)/dt = −ωh(t). These are easily converted into the discrete forms

    h(n+1) − h(n) = ωg(n)    g(n+1) − g(n) = −ωh(n)    (5)

When presented in a language using arrayed variables these equations are easily digested by the computer, as for example: for (n = 0; n < 30; n++) {h[n+1] = h[n] + ω*g[n]; g[n+1] = g[n] − ω*h[n];}. The choice of 30 increments is arbitrary, and if we think of them as nanoseconds the scaling is nine orders of magnitude. So if we wish the frequency to be, say, 50 MHz we must choose ω to be 2π·50 × 10⁶ scaled down by the power of nine, or ω = 0.314159. The result of executing this algorithm with these numbers and with initial conditions h = 0 and g = 1 is plotted in Figure 1. There is not much doubt that we are dealing with an unstable algorithm.

[Figure 1: Result of direct digitization of the harmonic motion equation.]

The closed form solution for h(n) can be derived from (5). To this end we must derive the numeric expression for h(n) as seen by the computer while executing our simple algorithm. We take the backward difference [1] of the first equation, which amounts to h(n+2) − 2h(n+1) + h(n) = ω[g(n+1) − g(n)]. Then we substitute the second equation for the bracketed term and get, after some simple algebraic manipulation, the second order difference equation for h(n) in the generic form

    h(n+2) − 2h(n+1)A + h(n)B = 0 with A = 1 and B = 1 + ω²    (6)

The closed-form solution to (6) is found in Equation (A15) of the Appendix as

    h(n) = (1 + ω²)^(n/2) [h(0)cos(nϕ) + ((h(1) − h(0))/ω) sin(nϕ)]    ϕ = tan⁻¹ ω    (7)

A comparison with (4) demonstrates a reassuring similarity except for the divergence factor (1 + ω²)^(n/2). This grows with the number of executions n and is obviously responsible for the instability of our algorithm (5).

We wish now to address this problem from a generic viewpoint of discrete calculus. The discrete domain is beset with problems originating in the basic question of what happens before something else. To demonstrate this, let us write our algorithm (5) in a few possible ways:

    h(n+1) = h(n) + ωg(n)      g(n+1) = g(n) − ωh(n)
    h(n+1) = h(n) + ωg(n)      g(n+1) = g(n) − ωh(n+1)
    h(n+1) = h(n) + ωg(n+1)    g(n+1) = g(n) − ωh(n)
    h(n+1) = h(n) + ωg(n+1)    g(n+1) = g(n) − ωh(n+1)

There are many other possibilities available as combinations of the pairs shown above, and we propose a generalized form that includes them all:

    h(n+1) = h(n) + ω[ag(n) + (1−a)g(n+1)]    (8)
    g(n+1) = g(n) − ω[bh(n) + (1−b)h(n+1)]    (9)

Factors a and b may adopt any values desired. We will now combine these two equations into a second order difference equation containing only h(n) and then derive its closed-form solution. By inspecting it we should then be able to decide what values our factors a and b should adopt to match the true solution (4). To accomplish this we take the first difference of (8), which produces the new expression

    h(n+2) = 2h(n+1) − h(n) + ω{a[g(n+1) − g(n)] + (1−a)[g(n+2) − g(n+1)]}

The bracketed expressions are directly available from (9), and when inserted into the above they yield, after some straightforward but messy algebra, the expression

    h(n+2) − 2h(n+1)A + h(n)B = 0    (10)

with

    A = [1 + abω² − (a+b)ω²/2] / [1 + (1−a)(1−b)ω²]
    B = [1 + abω²] / [1 + (1−a)(1−b)ω²]    (11)

We have encountered this equation earlier and found its solution in (A15) as

    h(n) = h(0)B^(n/2) cos nϕ + B^(n/2) [(h(1) − h(0)A)/√(B − A²)] sin nϕ    ϕ = tan⁻¹(√(B − A²)/A)    (12)

Substituting A and B from (11) we find, after some lengthy but elementary algebra, the following answer for our function:

    h(n) = [(1 + abω²)/(1 + (1−a)(1−b)ω²)]^(n/2) × [h(0)cos nϕ + ((h(0)(a−b)ω/2 + g(0))/√(1 − (a−b)²ω²/4)) sin nϕ]
    ϕ = tan⁻¹[ω√(1 − (a−b)²(ω/2)²) / (1 − (a+b−2ab)ω²/2)]    (13)

In deriving (13) we have evaluated the term h(1) from (8) and (9) by setting n equal to zero. Equation (13) again exhibits a divergence term that grows with the number of executions n. It is easy to see that this factor becomes unity when we set a + b = 1, which turns the solution into

    h(n) = h(0)cos nϕ + ((h(0)(a − 1/2)ω + g(0))/√(1 − (a² − a + 1/4)ω²)) sin nϕ

If we now set a to 0.5 then the above becomes h(n) = h(0)cos nϕ + g(0)sin nϕ, which matches the continuous solution (4) quite well with the exception of the argument ϕ. This differs from ω and is as such a source of frequency error. In Figure 2 we have plotted the fractional frequency error (ϕ − ω)/ω for the divergent case a = b = 1 and for cases that produce stable, non-divergent solutions with a + b = 1.

[Figure 2: Frequency error as a function of digitization interval for various sets of parameters.]

The case a = b = 0.5 matches the continuous solution (4) and would be written as

    h(n+1) = h(n) + ω[g(n) + g(n+1)]/2
    g(n+1) = g(n) − ω[h(n) + h(n+1)]/2

This happens to be an implicit algorithm, which computers do not accept kindly. With this much knowledge about the effect of our factors a and b we can now return to our algorithm (8) and (9) and make it stable by selecting the proper sequencing.

Of course, there are numerous methods that deal with implicit algorithms, but we can circumvent them and still satisfy the stability condition in a very straightforward way by choosing a = 1 and b = 0. The resulting algorithm is then

    h(n+1) − h(n) = ωg(n)    g(n+1) − g(n) = −ωh(n+1)    (14)

This can be cast into an appropriate computer language as we have done before: for (n = 0; n < 30; n++) {h[n+1] = h[n] + ω*g[n]; g[n+1] = g[n] − ω*h[n+1];}. Now a keen eye of a computer programmer would quickly note that the arrays can be eliminated for this specific case, so that the natural computer sequencing of statement evaluations can be exploited. We then simply write the program as for (n = 0; n < 30; n++) {h = h + ω*g; g = g − ω*h;}. This algorithm with the same parameters as before produces Figure 3, which meets our expectations for extended periods of time [2].

[Figure 3: Results of correct digitization of the harmonic motion equation.]

The foregoing example illustrates how the Z-transform may be used to develop algorithms of prescribed behavior. But there is no known technique other than trial and error that can be employed for complex simulation algorithm development. In the next section we show another example of the Z-transform's power to accomplish this task.

## ACCELERATED MOTION ALGORITHM DERIVED WITH Z-TRANSFORM

In this example we will demonstrate the use of convolution to derive algorithm parameters when the driving function is ill defined. The extraction of poles, required for inverse Z-transformation, limits the method to orders of less than five; consequently the technique is restricted to low orders. We will apply it to one of the most frequently encountered tasks in simulations of dynamics: the algorithm of accelerated motion based on Newton's laws.

A constant mass m, subjected to a force F(t), experiences an acceleration

    D(t) = F(t)/m    (15)

This acceleration generates a velocity differential

    dv(t) = D(t)dt    (16)

and a position differential

    ds(t) = v(t)dt    (17)

A direct integration of (16) and (17) produces for the position of the object

    s(t) = s(0) + v(0)t + ∫₀ᵗ dt ∫₀ᵗ D(t)dt    (18)

An algorithm to simulate accelerated motion on a computer can be elicited from Equations (16) and (17) by replacing the differentials with equivalent backward differences:

    Δv(n+1) = v(n+1) − v(n) = D    Δs(n+1) = s(n+1) − s(n) = v    (19)

In (19) we have intentionally avoided specifications of the temporal arguments for acceleration D and velocity v, because a number of choices are available. We could use the acceleration D(n+1) and velocity v(n+1), their previously evaluated values D(n) and v(n), or some combinations thereof. Without knowing the outcomes produced by a given choice, we have no reason to prefer any one of them. We have demonstrated this earlier before arriving at forms (8) and (9). Therefore we will again employ the parameter optimization procedure, this time with the expected answer (18) as the target. We choose as yet undefined fractions of the two primary choices, with the parameters a and b determining their contributions, as

    v(n+1) = v(n) + aD(n) + (1−a)D(n+1)    (20)
    s(n+1) = s(n) + bv(n) + (1−b)v(n+1)    (21)

Similar to the previous section, we will combine these into a second order difference equation, which calls for the elimination of the velocity terms from (21). We do this by taking the first difference of s(n+1), yielding

    s(n+2) − s(n+1) = s(n+1) − s(n) + b[v(n+1) − v(n)] + (1−b)[v(n+2) − v(n+1)]

The bracketed expressions are easily found from (20), and their substitution produces

    s(n+2) − 2s(n+1) + s(n) = abD(n) + (1−a)bD(n+1) + a(1−b)D(n+1) + (1−a)(1−b)D(n+2)    (22)

We recognize Equation (A10) in the Appendix as possessing the form of (22) when A = B = 1 and f(n) is equal to the RHS of (22). The specific solution of this equation is then as given by (A16), in which the asterisk * denotes convolution in the n domain:

    s(n) = s(0)(1 − n) + ns(1) + [abD(n) + (a+b−2ab)D(n+1) + (1−a)(1−b)D(n+2)] ∗ (n−1)

In order to make a comparison with (18) possible we need a closed form solution for s(n). The s(1) term can be obtained directly from (20) and (21) for n = 0 as s(1) = s(0) + v(0) + a(1−b)D(0) + (1−a)(1−b)D(1). With this the inverse transform is

    s(n) = s(0) + v(0)n + a(1−b)nD(0) + (1−a)(1−b)nD(1)
           + [abD(n) + (a+b−2ab)D(n+1) + (1−a)(1−b)D(n+2)] ∗ (n−1)    (23)

We must now produce an explicit form of the discrete convolution by using Equation (A9) from the Appendix, which for our case takes on the form f(j) ∗ (n−1) = Σ(j=0 to ∞) f(j)(n−1−j), where f(j) stands for the bracketed term in (23). In conformance with its notation we must use the summation index j. To enable a comparison of results we must limit ourselves to discrete times t = n where the information is available to us. It is quite obvious that the position s(n) at time n cannot possibly depend on accelerations at later times n+1 or n+2, so we will drop the terms D(n+1) and D(n+2) because they are meaningless to our goal. This decision has nothing to do with mathematical practice but only with common sense. Then our inverse transform becomes

    s(n) = s(0) + v(0)n + a(1−b)nD(0) + (1−a)(1−b)nD(1)
           + Σ(j=0 to n) [abD(j) + (a+b−2ab)D(j+1) + (1−a)(1−b)D(j+2)](n−1−j)    (24)

It is convenient to use the following easily proven identities:

    Σ(j=0 to n) D(j+1)(n−1−j) = Σ(j=0 to n) D(j)(n−j) − nD(0) − D(n+1)
    Σ(j=0 to n) D(j+2)(n−1−j) = Σ(j=0 to n) D(j)(n+1−j) − (n+1)D(0) − nD(1) − D(n+2)

Substitution of these into (24) yields

    s(n) = s(0) + v(0)n + a(1−b)nD(0) + (1−a)(1−b)nD(1)
           + Σ(j=0 to n) D(j)[ab(n−1−j) + (a+b−2ab)(n−j) + (1−a)(1−b)(n+1−j)]
           − (a+b−2ab)[nD(0) + D(n+1)] − (1−a)(1−b)[(n+1)D(0) + nD(1) + D(n+2)]

We combine the sums, and after an elementary but lengthy algebraic manipulation we end up with

    s(n) = s(0) + v(0)n + (a−1)nD(0) − (1−a)(1−b)D(0)
           + Σ(j=0 to n) D(j)[ab(n−1−j) + (a+b−2ab)(n−j) + (1−a)(1−b)(n+1−j)]    (25)

It should be pointed out that we have arrived at this universal answer by means of convolution. This can be a real time saver whenever we wish to examine the outcome of a variety of excitations. Without specifying the acceleration function D(j) one cannot further expand expression (25). But we do have the freedom to specify any acceleration within reason to find the resulting position s(n) of the object in question. So let us now do it for two kinds of accelerations and see what values the factors a and b have to adopt to force the algorithm (25) into alignment with Newton's laws contained in (18).

We can think of an arbitrary force as a sequence of steps, one following another. In the continuous domain such a sequence would consist of infinitesimally small steps occurring at infinitesimally small time intervals. In the discrete domain, finite steps occurring at unit time intervals would represent such an arbitrary acceleration function. Consequently, the results obtained for a single step will be representative of a whole array of acceleration functions.

### Step Acceleration

We define the step acceleration as Ds(n) = Ds·u(n), where u(n) is the unit step function

    u(n) = 0 if n < 0, 1 otherwise

It is illustrated in Figure 4 as a continuous function by dashed lines and as its discrete equivalent by dots appearing at unit time intervals. The acceleration is equal to Ds for all times equal to or greater than zero. If we think of it as having started at minus infinity, we have a constant acceleration case at hand. This type of acceleration can be employed to represent a multitude of cases, including zero and constant acceleration.

[Figure 4: Stepwise acceleration in the discrete domain.]

When we perform the integrations indicated in (18) we obtain for the object's position, expressed at discrete times, the expression

    s(n) = s(0) + v(0)n + Ds·n²/2,  n ≥ 0    (26)

We will now substitute the same step acceleration into (25).

In the above we have again used the discrete notation for time, t = n, which results in the following definition of the step acceleration for this case:

    D(j) = 0 if j < 0, Ds otherwise

Consequently our expression (25) becomes

    s(n) = s(0) + v(0)n + (a−1)nDs − (1−a)(1−b)Ds
           + Ds Σ(j=0 to n) [ab(n−1−j) + (a+b−2ab)(n−j) + (1−a)(1−b)(n+1−j)]

Applying the following identities

    Σ(j=0 to n) (n−1−j) = n(n−1)/2
    Σ(j=0 to n) (n−j) = n(n+1)/2
    Σ(j=0 to n) (n+1−j) = n(n+1)/2 + n + 1

we get

    s(n) = s(0) + v(0)n − (1−a)(1−b)Ds + (a−1)nDs
           + Ds[ab·n(n−1)/2 + (a+b−2ab)·n(n+1)/2 + (1−a)(1−b)·(n(n+1)/2 + n + 1)]

After another bout with algebra we end up with this simple answer:

    s(n) = s(0) + v(0)n + Ds[n²/2 + n(0.5 − b)]    (27)

A comparison with (26) suggests the answer b = 0.5 to obtain a match. No restriction on a is placed by (27). Independence from a is understandable for our particular step function case, and extreme values can be used without ill effects. But fast and large fluctuations of acceleration might not be served well by large values of a; we will return to this question in the next section, where we address the impulsive acceleration. When we plug b = 0.5 into (20) and (21) we get the algorithm for computation of accelerated motion that matches Newton's laws for a step input:

    v(n+1) = v(n) + aD(n) + (1−a)D(n+1)
    s(n+1) = s(n) + [v(n) + v(n+1)]/2    (28)

In Figure 5 we see a plot produced by (28). In it we have used zero initial values for v and s, a unity value for Ds, and 0.5 for both a and b. The discrete solution (28) correctly predicts the motion at discrete points in time when a step input is applied; it also correctly predicts the inertial motion in the absence of external acceleration forces.

[Figure 5: Simulation of uniformly accelerated motion.]

### Impulsive Acceleration

The impulsive acceleration is illustrated in Figure 6 by dashed lines for the continuous domain and by discrete points for the discrete time domain. It appears at a given time instant and then vanishes. Such accelerations arise frequently in nature when particles are colliding, and the simulation of such motion addresses a number of natural behaviors, thermodynamics being one of them. We denote the impulsive acceleration by Di(n) = Di·δ(n), where δ(n) is the Kronecker delta function, defined as

    δ(n) = 1 if n = 0, 0 otherwise

[Figure 6: Impulsive acceleration in the discrete domain.]

When the corresponding acceleration Di·δ(n) is substituted into (18) we get the following answer for the position of the object in discrete notation:

    s(n) = s(0) + v(0)n + Di·n,  n ≥ 0    (29)

Equation (29) will be used as the reference for establishing the parameters a and b in (25) for this case. If we apply the same impulsive acceleration to (25), the summation term has a value only when the index j is zero, and (25) assumes the form

    s(n) = s(0) + v(0)n + Di·a(n − b)    (30)

It is apparent that a match between (30) and (29) demands a = 1 and b = 0. Applying these values to (28), with zero initial conditions and Di = 1, results in Figure 7, in which the continuous Newton solution is shown as a dashed line. While agreements are achieved at the discrete points as demanded, a closer match would be desired; unfortunately the discrete algorithm does not provide any information at half integer points. A shift of the graph to the left by 0.5 would certainly produce the desired pattern, and it can be achieved by imposing an initial value of −0.5 on s(0) in (30). This produces the plot shown in Figure 8.

[Figure 7: Discrete motion following a stepwise acceleration compared to theoretical expectation.]
[Figure 8: Improved match to expected motion by initial condition choice.]

The object of this analysis was not to computerize the physics of accelerated motion but to demonstrate the technique for the analysis of relevant algorithms. We will therefore leave this section and take on a financial example of Z-transform application.

## USE OF Z-TRANSFORM TO DEVELOP A FINANCIAL ALGORITHM

This time we want to derive an algorithm that will address an investment issue. Assume that we have at our disposal an amount of money m(0) and know that we need an annual amount of w(0) to live on at today's prices. The available investment offers an annual appreciation rate of 100p% but we predict an inflation of 100i% and want our income to follow it. How long will our money last? An alternate question we want answered is how much money is left after n years.

We define the variable m(n) to be the money amount after n years, to which the appreciation rate p and the inflation rate i apply, and w(n) to be the withdrawal in the nth year. With these quantities we can define the following relationships:

    Money after nth withdrawal:    m(n) − w(n)
    Money after appreciation:      m(n+1) = [m(n) − w(n)]P    (31)
    Inflated withdrawal:           w(n+1) = w(n)I    (32)

The shorthand used in the above is P = 1 + p and I = 1 + i.

Take the first difference of (31), which is m(n+2) − m(n+1) = m(n+1)P − m(n)P − w(n+1)P + w(n)P and can be presented in the form

    m(n+2) − m(n+1)(1 + P) + m(n)P + [w(n+1) − w(n)]P = 0    (33)

The square bracketed expression equals w(n)i according to (32). From (31) we also get w(n)iP = m(n)iP − m(n+1)i. With this, and after some elementary algebra, we can write (33) as

    m(n+2) − m(n+1)(I + P) + m(n)IP = 0    (34)

This is recognized as the second order difference equation encountered in the Appendix as (A10) when f(n) = 0, A = (I + P)/2 and B = IP. According to (A12) the poles of Equation (34) are real and easily evaluated as z1 = I, z2 = P. The solution is given in (A13) as

    m(n) = m(0)(I^(n+1) − P^(n+1))/(I − P) + [m(1) − 2m(0)(I + P)/2](I^n − P^n)/(I − P)

m(1) is obtained from (31) by setting n to zero as m(0)P − w(0)P. With this, and after some elementary algebra whose only pleasure are the numerous cancellations of terms, the final solution to our problem is

    m(n) = P^n [m(0) − w(0)(1 − (I^n/P^n))/(1 − (I/P))]    (35)

Expression (35), divided by the inflated withdrawal from (32), is plotted in Figures 9–11 as a function of n. The three graphs tell us how the initial principal m(0), invested at the rate p, will behave with time if we start drawing one tenth, one twentieth, or one fortieth of the initial principal, respectively, while allowing the withdrawals to increase at the inflation rate i. The controlling parameter is the ratio P/I as defined earlier.

Take an example from Figure 10 and choose the initial principal m(0) to be one million dollars: we start drawing 50,000 dollars the first year but then withdraw every year 3% more to allow for the inflation. If we want to preserve the initial capital forever, the required ratio P/I is 1.053 as seen from the graph; to achieve this we must have invested at 1.053 × 1.03, or about 8.5%. If on the other hand we can get only 4% interest on our investment, the ratio of P to I is 1.04/1.03 = 1.01, and the lower curve in the middle graph, which applies to this case, tells us that the money will run out in about 22 years. While interesting, any further discussion of this example would distract from the main thrust of this paper.

[Figure 9: Ratio of investment status m to inflated annual withdrawal w as a function of time in years n, for different ratios of investment P and inflation rate I, for an initial ratio m/w = 40.]
[Figure 10: Ratio of investment status m to inflated annual withdrawal w as a function of time in years n, for different ratios of investment P and inflation rate I, for an initial ratio m/w = 20.]
[Figure 11: Ratio of investment status m to inflated annual withdrawal w as a function of time in years n, for different ratios of investment P and inflation rate I, for an initial ratio m/w = 10.]

## APPLICATION OF Z-TRANSFORM TO GAMBLING STATISTICS

Plain intuition would tell a gambler that the odds of winning are steadily improving with the number of attempts at the same game. But the facts are different, as we will prove by an analysis which will at the same time allow us to demonstrate the use of a two dimensional Z-transform. An example of such use in physics is found in Ref. [3]. Here we will attempt to find the probability p(s, n) of s successes in n trials at a game for which the probability of a success in a single trial is known. Let us designate this probability as being equal to p. Then:

    Probability of a success in a single trial:    p(1, 1) = p    (36)
    Probability of a failure in a single trial:    p(0, 1) = 1 − p    (37)
    Probability of s successes in n trials:        p(s, n), to be found    (38)

We can say with certainty that there can be no successes without trials, which means that we can set p(1, 0) = p(2, 0) = p(3, 0) = ··· = p(s, 0) = 0. We can also say that failure is a certainty with no trials, that is, p(0, 0) = 1. The quantity p(0, n) is the probability of zero successes in n trials. Because the probability of one failure in one trial is (1 − p), we immediately conclude that p(0, n) must be (1 − p)^n.

Finally we can state that s successes in n trials can be achieved in two possible ways:

• Either we had s − 1 successes in the previous n − 1 trials, followed by a success in the nth trial. The probability of this happening is p(s − 1, n − 1) times the probability of success in one trial, which from (36) amounts to p(1, 1) = p.
• Or we had s successes in the previous n − 1 trials, followed by a failure in the nth trial. The probability of this happening is p(s, n − 1) times the probability of failure in a single trial, given by (37) as p(0, 1) = 1 − p.

The two probabilities add up to yield p(s, n) = p(s − 1, n − 1)p + p(s, n − 1)(1 − p). Advancing both s and n by one increment, we obtain

    p(s+1, n+1) = p(s, n)·p + p(s+1, n)·(1 − p)    (39)

This is a partial difference equation in the variables s and n. It can be solved by a two-dimensional Z-transformation. To this end we introduce the needed z-domain variables

    Z(s→u)[p(s, n)] = P(u, n)
    Z(n→z)[P(u, n)] = Z(n→z)[Z(s→u)[p(s, n)]] = P(u, z)    (40)

The last expression is the doubly transformed p(s, n), denoted by P(u, z), where the respective z domain variables are u for s and z for n. Let us now apply (A4) to do the first transformation of (39) from s into the u domain. This yields

    uP(u, n+1) − up(0, n+1) = pP(u, n) + (1 − p)uP(u, n) − (1 − p)up(0, n)

Because p(0, n) = (1 − p)^n, the term up(0, n+1) on the left cancels the term (1 − p)up(0, n) on the right. The simplified equation after the first transformation is now

    uP(u, n+1) = pP(u, n) + (1 − p)uP(u, n)

We now transform this with respect to n, using the notation set down in (40) and the method suggested in (A4).

This produces

    zuP(u, z) − zuP(u, 0) = pP(u, z) + (1 − p)uP(u, z)

which is easily solved for P(u, z):

    P(u, z) = zuP(u, 0)/[zu − p − (1 − p)u] = [zu/(z − 1 + p)]P(u, 0)/[u − p/(z − 1 + p)]    (41)

We must now perform a double inverse transformation. First we do it with respect to u into the s domain. We have a single pole at u = p/(z − 1 + p), which makes the transformation quite simple. But the numerator contains a product of two functions of u. The inverse transforms of these two factors must then be convolved in the s domain; we indicate the intended operation with an asterisk subscripted to identify the domain:

    P(s, z) = p(s, 0) ∗s Z⁻¹(u→s)[(zu/(z − 1 + p))/(u − p/(z − 1 + p))]

After substitution of the pole location for u we get

    P(s, z) = p(s, 0) ∗s [p^s z/(z − 1 + p)^(s+1)]

The next step is to invert the above with respect to z. The convolution and all functions of s are not affected by this last transformation because they are constants in the z–n domain. Note that we are faced with a single but (s+1)-st order pole at z = 1 − p. There is also just a single residuum, which is easily evaluated when (A7) is invoked:

    p(s, n) = p(s, 0) ∗s (1/s!) (d^s/dz^s)[P(s, z)(z − 1 + p)^(s+1) z^(n−1)] |z=1−p
            = p(s, 0) ∗s (p^s/s!) (d^s z^n/dz^s) |z=1−p

The sth order derivative of z^n is easily found to be

    d^s z^n/dz^s = n(n−1)(n−2)···(n−s+1) z^(n−s) = [n!/(n−s)!] z^(n−s)

Substituting this in the above and inserting the z value at the pole we get

    p(s, n) = p(s, 0) ∗s [(p^s/s!)(n!/(n−s)!)(1 − p)^(n−s)]

The only task left now is to convolve the two functions as indicated by the asterisk. The confusion of the many variables involved calls for some handholding, so we transcribe (A9) for our case as f1(s) ∗s f2(s) = Σ(j=0 to ∞) f1(s − j)f2(j), where

    f1(s) = p(s, 0) and f2(s) = [p^s n!/(s!(n−s)!)](1 − p)^(n−s)

With this aid we can write for the probability of s successes in n trials

    p(s, n) = Σ(j=0 to ∞) p(s − j, 0)[p^j n!/(j!(n−j)!)](1 − p)^(n−j)

The summation extends over all values of j. But we have concluded at the very outset that p(s, 0) is zero for all values of s except zero, where p(0, 0) = 1. Consequently the summation contributes zero for all values of j except when j = s. This leads instantly to the final answer for the probability of s successes in n trials:

    p(s, n) = [n!/(s!(n−s)!)] p^s (1 − p)^(n−s)    (42)

Expression (42) is plotted in Figure 12 for the case p = 0.3. Recall that p is the probability of one success in one trial, p(1, 1). As one would expect for p = 0.3, the curve s = 1 has a value of 0.3 at n = 1. The s = 0 case requires a failure at each of the n trials; because the probability of a failure is (1 − p), the probability of n failures is this quantity to the power of n, and the curve s = 0 exhibits this exponential behavior.

[Figure 12: Probability of s successes in n trials when the probability of one success in one trial p is 30%.]

For other probabilities of a single success p we can construct a whole family of graphs. We show in Figure 13 only one additional case, for p = 0.15. The curve s = 0 drops off more slowly than in the previous case because the probability of a failure is 0.85 here as compared to 0.7 in the preceding case. The chances of winning, on the other hand, are somewhat lower for obvious reasons.

[Figure 13: Probability of s successes in n trials when the probability of one success in one trial p is 15%.]

The most important lesson for gamblers to learn from these graphs is that it does not pay to bet on the same game for a long time. For example, if one has not won once after some 8 trials when p = 0.15, the chances of winning are rapidly diminishing beyond that point. The same is true after only three trials if p = 0.3.