



A Neighborhood of Infinity

Saturday, September 18, 2010

On Removing Singularities from Rational Functions
Introduction

Suppose we have the function
> f x = 1/(x + x^2) - 1/(x + 2*x^2)
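Before removing the singularity, it's worth seeing it. A quick standalone check in Double arithmetic (not part of the post's literate code; the name fD is just for this sketch) shows that f is well behaved near 0 but blows up at 0 itself:

```haskell
-- The same f in plain floating point: fine near zero, NaN at zero.
fD :: Double -> Double
fD x = 1/(x + x^2) - 1/(x + 2*x^2)

nearZero, atZero :: Double
nearZero = fD 1e-6  -- close to the limit, 1
atZero   = fD 0     -- Infinity - Infinity = NaN

main :: IO ()
main = print (nearZero, atZero)
```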

Some basic algebraic manipulation shows that in the limit as x → 0, f(x) → 1. But we can't simply compute f 0 because this computation involves division by zero at intermediate stages. How can we automate the process of computing the limit without implementing symbolic algebra?

I've already described one way to remove singularities from a function. But that approach is very limited in its applicability. This article is about a variation on the approach to formal power series that nicely showcases some advantages of lazy lists. It will allow us to form Laurent series of functions so we can keep track of the singularities.

The usual Haskell approach to power series allows you to examine the coefficients of any term in the power series of the functions you can form. These series can't be used, however, to evaluate the function. Doing so requires summing an infinite series, but we can't do so reliably because no matter how many terms in a power series we add, we can never be sure that there aren't more large terms further downstream that we haven't reached yet. And if we want to perform computations completely over the rationals, say, we don't want to be dealing with infinite sums.

I'd like to look at a way of working with power series that allows us to perform exact computations, making it possible to answer questions like "what is the sum of all the terms in this power series starting with the x^n term?" By extending to Laurent series, and implementing the ability to selectively sum over just the terms with non-negative powers, we can compute functions like f above at 0 and simply skip over the troublesome poles (http://en.wikipedia.org/wiki/Pole_%28complex_analysis%29).

Power Series

When I previously discussed power series I used code that worked with the coefficients of the power series. This time we want to work with values of the function, so it makes sense to store, not the coefficients a_i, but the terms themselves, a_i x^i. So instead of a list of coefficients, Num a => [a], we need a representation that looks a little like:
> data Power a = Power (a -> [a])

where we pass x in as an argument to the function contained in a Power. But we also want to allow Laurent series, so we need to also store an offset to say which (possibly negative) term our series starts with:
> data Laurent a = Laurent (a -> (Int, [a]))

But this fails us for at least two reasons:

1. We have the individual terms, but to evaluate the function requires summing all of the terms in an infinite list.
2. If we have a Laurent series, then we need to store values of a_i x^i for x = 0 and i < 0. We'll end up with division by zero errors.

Partial Sum Series

So here's what we'll do instead. Suppose our power series is Σ_{i=n}^∞ a_i x^i. We'll store the partial sums s_j = Σ_{i=j}^∞ a_i x^(i−j); note that consecutive entries are related by s_j = a_j + x s_{j+1}. Our type will look like:
> data Partial a = Partial (a -> (Int, [a]))
>
> -- Empty instances: Eq and Show were superclasses of Num when this was written.
> instance Eq (Partial a)
> instance Show (Partial a)
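To get a feel for what these s_j lists contain, here is a small standalone numeric sketch (Double arithmetic, truncating the infinite tails — not part of the literate code): for the geometric series, where every a_i = 1, each partial sum s_j = Σ_{i≥j} x^(i−j) is the same value 1/(1−x), so the stored list is constant.

```haskell
-- Partial sums s_j = sum over i >= j of a_i * x^(i-j), truncated at
-- 400 terms, for the geometric series a_i = 1.
sj :: Double -> Int -> Double
sj x j = sum [x ^ (i - j) | i <- [j .. j + 400]]

s0, s5 :: Double
s0 = sj 0.5 0  -- ~ 2.0 = 1/(1 - 0.5)
s5 = sj 0.5 5  -- also ~ 2.0: the x^j factor has been divided out

main :: IO ()
main = print (s0, s5)
```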

It's straightforward to add two functions in this form. We just add them term by term, after first aligning them so that the x^i term in one is lined up with the x^i term in the other:
> instance Num a => Num (Partial a) where
>   Partial f + Partial g = Partial $ \x ->
>     let (m, xs) = f x
>         (n, ys) = g x
>         pad 0 _ ys = ys
>         pad n x ys = let z:zs = pad (n-1) x ys
>                      in x*z : z : zs
>         l = min m n
>     in (l, zipWith (+) (pad (m-l) x xs) (pad (n-l) x ys))

Notice the slight subtlety in the alignment routine pad. By the definition above, the jth entry has a factor of x^(−j) built into it, so we need to multiply by x each time we pad our list on the left.
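Pulled out of the instance so it can run on its own, pad behaves like this (a standalone sketch):

```haskell
-- pad n x ys prepends n entries to a partial-sum list. Each prepended
-- entry is x times its successor, since s_(j-1) = x * s_j when a_(j-1) = 0.
pad :: Num a => Int -> a -> [a] -> [a]
pad 0 _ ys = ys
pad n x ys = let z:zs = pad (n - 1) x ys
             in  x*z : z : zs

-- Shifting the list for the constant series 1 (partial sums 1,0,0,...)
-- down two places at x = 0.5 multiplies the head by x twice.
padded :: [Double]
padded = take 4 (pad 2 0.5 (1 : repeat 0))

main :: IO ()
main = print padded  -- [0.25,0.5,1.0,0.0]
```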
Now we need to multiply series. We know from ordinary power series that we need some sort of convolution. But it looks like for this case we have an extra complication: we appear to need to difference our representation to get back the original terms, convolve, and then resum. Amazingly, we don't need to do this at all. We can convolve 'in place', so to speak. Here's what an ordinary convolution looks like when we want to multiply the sequence of terms (a_i) by (b_i):

In this example, the blue diagonal corresponds to the terms that are summed to get the 4th term in the result. However, we wish to work with partial sums s_j = Σ_{i=j}^∞ a_i x^(i−j) and t_j = Σ_{i=j}^∞ b_i x^(i−j), constructing the partial sums of the convolution of a and b from s and t. The partial sums of the convolution can be derived from the partial sums by tweaking the convolution so it looks like this:

The blue terms work just like before and need to be summed. But we also need to subtract off the red terms, weighted by a factor of x. That's it! (I'll leave that as an exercise to prove. The inclusion-exclusion principle helps.) The neat thing is that the red terms for each sum are a subset of the blue terms needed for the next element. We don't need to perform two separate sums. We can share much of the computation between the red and blue terms. All we need to do is write an ordinary convolution routine that additionally returns not just the blue terms, but a pair containing the blue sum and the red sum.
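The first couple of terms of the claimed identity can be checked by hand using s_0 = a_0 + x s_1 and t_0 = b_0 + x t_1. Writing u_j for the partial sums of the product:

```latex
u_0 = \Big(\sum_{p \ge 0} a_p x^p\Big)\Big(\sum_{q \ge 0} b_q x^q\Big) = s_0 t_0,
\qquad
u_1 = \frac{s_0 t_0 - a_0 b_0}{x} = s_0 t_1 + s_1 t_0 - x\, s_1 t_1.
```

For u_1 the blue sum is s_0 t_1 + s_1 t_0 and the red term, weighted by x, is s_1 t_1, matching the picture.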
>   Partial f * Partial g = Partial $ \x ->
>     let (m, xs) = f x
>         (n, ys) = g x
>         (outer, inner) = convolve xs ys
>         f' a b = a - x*b -- (the subtraction I mentioned above)
>     in (m + n, zipWith f' outer inner)
>   fromInteger n = let n' = fromInteger n in Partial $ \_ -> (0, n' : repeat 0)
>   negate (Partial f) = Partial $ \x -> let (m, xs) = f x
>                                        in (m, map negate xs)
>   signum = error "signum not implemented"
>   abs    = error "abs not implemented"

This is an ordinary convolution routine tweaked to return the partial sum inner.


> convolve (a0:ar@(a1:as)) ~(b0:br@(b1:bs)) =
>     let (inner, _) = convolve ar br
>         ab = map (a0 *) bs
>         ba = map (* b0) as
>     in (a0*b0 : a0*b1 + a1*b0
>         : zipWith3 (\a b c -> a + b + c) inner ab ba, 0 : inner)
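As a quick numeric sanity check (a standalone sketch in Double arithmetic, repeating convolve so the block runs on its own): multiplying the partial-sum lists of 1 + x and 1 − x at x = 0.5, and applying the outer − x·inner correction, should reproduce the partial sums of 1 − x², namely 1 − x², −x, −1, 0, …

```haskell
convolve :: Num a => [a] -> [a] -> ([a], [a])
convolve (a0:ar@(a1:as)) ~(b0:br@(b1:bs)) =
    let (inner, _) = convolve ar br
        ab = map (a0 *) bs
        ba = map (* b0) as
    in  (a0*b0 : a0*b1 + a1*b0 :
         zipWith3 (\a b c -> a + b + c) inner ab ba,
         0 : inner)

-- Partial sums of (1 + x) * (1 - x) at x = 0.5, via outer - x * inner.
prodSums :: [Double]
prodSums =
    let x = 0.5
        sA = (1 + x) : 1 : repeat 0     -- partial sums of 1 + x
        sB = (1 - x) : (-1) : repeat 0  -- partial sums of 1 - x
        (outer, inner) = convolve sA sB
    in  take 4 (zipWith (\a b -> a - x*b) outer inner)

main :: IO ()
main = print prodSums  -- [0.75,-0.5,-1.0,0.0]
```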

The code is very similar to the usual power series multiplication routine. We can also use the same method described by McIlroy to divide our series. As our series are a munged up version of the usual power series it's pretty surprising that it's possible to divide with so little code:
> instance (Eq a, Fractional a) => Fractional (Partial a) where
>   fromRational n = let n' = fromRational n in Partial $ \_ -> (0, n' : repeat 0)
>   recip (Partial f) = Partial $ \x ->
>     let nibble (n, y:ys) | y == 0    = nibble (n+1, ys)
>                          | otherwise = (n, y:ys)
>         (n, xs) = nibble (f x)
>     in (negate n, rconvolve x xs)

In effect, rconvolve solves the equation convolve a b == 1:


> rconvolve x (a0:ar@(a1:as)) =
>   let (outer, inner) = convolve ar result
>       f a b = x*b - a
>       r = -1 / f a0 a1
>       result = recip a0 : (map (r *) $ zipWith f outer inner)
>   in result
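Here's a standalone check of rconvolve (again repeating convolve so the block compiles by itself): at x = 0 the partial-sum list of 1 − x is just its coefficient list 1, −1, 0, …, and its reciprocal should come out as the geometric series 1, 1, 1, …

```haskell
convolve :: Num a => [a] -> [a] -> ([a], [a])
convolve (a0:ar@(a1:as)) ~(b0:br@(b1:bs)) =
    let (inner, _) = convolve ar br
        ab = map (a0 *) bs
        ba = map (* b0) as
    in  (a0*b0 : a0*b1 + a1*b0 :
         zipWith3 (\a b c -> a + b + c) inner ab ba,
         0 : inner)

rconvolve :: Fractional a => a -> [a] -> [a]
rconvolve x (a0:ar@(a1:as)) =
    let (outer, inner) = convolve ar result
        f a b = x*b - a
        r = -1 / f a0 a1
        result = recip a0 : (map (r *) $ zipWith f outer inner)
    in  result

-- Coefficients of 1/(1 - x) at x = 0: the geometric series.
geo :: [Double]
geo = take 5 (rconvolve 0 (1 : (-1) : repeat 0))

main :: IO ()
main = print geo  -- [1.0,1.0,1.0,1.0,1.0]
```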

Note one ugly quirk of this code: I need to 'nibble' off leading zeroes from the series. This requires our underlying type a to have computable equality. (In principle we can work around this using parallel or.) That's it. We can now write a function to compute the positive part of a rational function. (By positive part, I mean the sum of all the terms using non-negative powers of x.)
> pos f z = let Partial g = f $ Partial $ \x -> (1, 1 : repeat 0)
>               (n, xs) = g z
>           in if n > 0
>              then z^n * head xs
>              else xs !! negate n

Here are some examples:


> test1 = let f x = (1 + 2*x) / (3 - 4*x*x)
>         in pos (\x -> 1/(f x - f 0)/x) (0 :: Rational)
> test2 = pos (\x -> 1/(1 + 4*x + 3*x^2 + x^3) - 1/(1 + x)) (1 :: Rational)

The original example I started with:


> test3 = pos (\x -> 1/(x + x^2) - 1/(x + 2*x^2)) (0 :: Rational)
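To check the whole thing end to end, here is all of the code above gathered into one self-contained file, with one modernisation: an Eq a constraint on the Fractional instance, which current GHC needs now that Eq is no longer a superclass of Num. It confirms that test3 evaluates to exactly 1.

```haskell
data Partial a = Partial (a -> (Int, [a]))

instance Num a => Num (Partial a) where
    Partial f + Partial g = Partial $ \x ->
        let (m, xs) = f x
            (n, ys) = g x
            pad 0 _ zs = zs
            pad k y zs = let w:ws = pad (k - 1) y zs
                         in  y*w : w : ws
            l = min m n
        in  (l, zipWith (+) (pad (m - l) x xs) (pad (n - l) x ys))
    Partial f * Partial g = Partial $ \x ->
        let (m, xs) = f x
            (n, ys) = g x
            (outer, inner) = convolve xs ys
            f' a b = a - x*b  -- subtract the red terms, weighted by x
        in  (m + n, zipWith f' outer inner)
    fromInteger n = let n' = fromInteger n in Partial $ \_ -> (0, n' : repeat 0)
    negate (Partial f) = Partial $ \x -> let (m, xs) = f x
                                         in  (m, map negate xs)
    signum = error "signum not implemented"
    abs    = error "abs not implemented"

instance (Eq a, Fractional a) => Fractional (Partial a) where
    fromRational q = let q' = fromRational q in Partial $ \_ -> (0, q' : repeat 0)
    recip (Partial f) = Partial $ \x ->
        let nibble (n, y:ys) | y == 0    = nibble (n + 1, ys)
                             | otherwise = (n, y:ys)
            (n, xs) = nibble (f x)
        in  (negate n, rconvolve x xs)

convolve :: Num a => [a] -> [a] -> ([a], [a])
convolve (a0:ar@(a1:as)) ~(b0:br@(b1:bs)) =
    let (inner, _) = convolve ar br
        ab = map (a0 *) bs
        ba = map (* b0) as
    in  (a0*b0 : a0*b1 + a1*b0 :
         zipWith3 (\a b c -> a + b + c) inner ab ba,
         0 : inner)

rconvolve :: Fractional a => a -> [a] -> [a]
rconvolve x (a0:ar@(a1:as)) =
    let (outer, inner) = convolve ar result
        f a b = x*b - a
        r = -1 / f a0 a1
        result = recip a0 : (map (r *) $ zipWith f outer inner)
    in  result

-- Sum of the terms with non-negative powers of x, evaluated at z.
pos :: Num a => (Partial a -> Partial a) -> a -> a
pos f z = let Partial g = f (Partial $ \_ -> (1, 1 : repeat 0))
              (n, xs) = g z
          in  if n > 0 then z^n * head xs else xs !! negate n

test3 :: Rational
test3 = pos (\x -> 1/(x + x^2) - 1/(x + 2*x^2)) 0

main :: IO ()
main = print test3  -- 1 % 1
```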

No division by zero anywhere!

Conclusions

The code works. But it does have limitations. As written it only supports rational functions. It's not hard to extend to square roots. (Try writing the code - it makes a nice exercise.) Unfortunately, any implementation of square root will (I think) require a division by x. This means that you'll be able to compute the positive part away from zero, but not at zero. This method can't be extended fully to transcendental functions, but it is possible to add partial support for them. So with a little work we can still compute the positive part of functions like 1/sqrt(cos x - 1) away from x == 0. But applying cos to an arbitrary rational function may need more complex methods. I encourage you to experiment.

Note that this code makes good use of laziness. If your function has no singularities then you might find it performs no computations beyond what is required to compute the ordinary numerical value.

Posted by Dan Piponi at Saturday, September 18, 2010

2 comments:

Andy Elvey said...
Hi Dan! Wow, this looks like high-powered stuff! Anyway, sorry to go off-topic here, but I'm just asking about SASL (I'd emailed you but hadn't had a reply). I've just come across your SASL compiler (which is great!). I'm thinking of doing a small public-domain interpreter or compiler myself, and so I might use some of the SASL code. So, just wanting to ask - is the SASL code basically P.D.? (In other words, "use as you like, but attribution is appreciated"?) I will certainly acknowledge you if I do use any of the code! Many thanks for your time - looking forward to hearing from you! Bye for now - Andy Elvey
Friday, 24 September, 2010

RobinHoode said...
I think I may have spotted a typo, although pretty minor. In the line: "However, we wish to work with partial sums s_j = Σ_{i=j}^∞ a_i x^(i−j) and t_j = Σ_{i=j}^∞ t_i x^(i−j), constructing the partial sums of the convolution of a and b from s and t." Did you mean to use b_i on the RHS, not t_i? It's obvious what you mean here, just thought I'd point that out.
Saturday, 23 October, 2010

http://blog.sigfpe.com/2010/09/on-removing-singularities-from-rational.html
