Reading Problems
Computer Algebra Systems were originally conceived in the early 1970s by researchers work-
ing in the area of artificial intelligence. The first popular systems were Reduce, Derive
and Macsyma. Commercial versions of these programs are still available. The two most
commercially successful CAS programs are Maple and Mathematica. Both programs
have a rich set of routines for solving a wide range of problems found in engineering
applications associated with research and teaching. Several other software packages, such as
MathCAD and MATLAB, include a Maple kernel for performing symbolic-based calcu-
lation. In addition to these popular commercial CAS tools, a host of other less popular or
more focused software tools are available, including Axiom and MuPAD.
The following is a brief overview of the origins of several of these popular CAS tools along
with a summary of capabilities and availability.
Axiom
Axiom is a powerful computer algebra package that was originally developed as a research
tool by Richard Jenks’ Computer Mathematics Group at the IBM Research Laboratory in
New York. The project began in 1978 under the name Scratchpad and the first release of
Axiom was launched in 1991. The research group is widely recognized for some outstanding
achievements in the field of computer algebra and their work was augmented by contributions
from a multitude of experts around the world. This collaboration continues to play an
important part in the development of Axiom.
The object oriented design of Axiom is unique in its recognition of a ‘universe’ of mathemat-
ical components and the relationships amongst them. The strong typing and hierarchical
structure of the system leads to it being both consistent and robust. Many users are wary
of strongly typed languages but Axiom employs comprehensive type-inferencing techniques
to overcome problems encountered in some older packages. It is a very open system, with all
library source code held on-line. This means that all the algorithms employed and the type-
inferencing performed on behalf of the user are readily visible, making Axiom particularly
attractive for teaching purposes.
The package includes an interface that allows users to develop their own object oriented
programming code and add it in compiled form to the Axiom library which leads to significant
performance improvements. Axiom is the only computer algebra package providing such a
facility. Code can also be ‘lifted out’ of the Axiom system to form stand-alone programs
which are independent of the Axiom environment.
Axiom was available commercially for several years through NAG (Numerical Algorithms
Group), but their support of the product terminated in December 2001. It is now distributed
under an open license through several ftp sites worldwide.
Derive
Derive had its roots in the Mu-Math program first released in 1979, with the first available
version of Derive appearing in 1988. Its main competitive advantage over programs such as
Maple and Mathematica was its limited use of computer resources, including RAM and hard
disk space. While Derive does not have the same power and breadth of features as Maple or
Mathematica, it is a more suitable package for the casual or occasional user, and probably
for many professional users whose needs are for standard rather than esoteric features.
Derive is the ideal program for demonstration, research, and mathematical exploration be-
cause of its limited use of computer resources. Users appreciate the easy to use, menu-driven
interface. Just enter a formula, using standard mathematical operators and functions. De-
rive will display it in an easy-to-read format using raised exponents and built up fractions.
Select from the menu to simplify, plot, expand, factor, place over a common denominator,
integrate, or differentiate. Derive intelligently applies the rules of algebra, trigonometry,
calculus, and matrix algebra to solve a wide range of mathematical problems. This non-
numeric approach goes far beyond the capabilities of mere statistics packages and equation
solvers that use only approximate numerical techniques. Powerful capabilities exist for 2D
(Cartesian, polar, and parametric) and 3D graphing.
The current version of Derive for PCs is Derive 5, available from Texas Instruments for
$199 US ($99 US for an educational version). System requirements include a Windows 95,
98, ME, NT, 2000 or XP compatible PC (minimum RAM and processor requirements are
the same as the operating system requirements), a CD-ROM drive, and less than 4 MB of
disk space.
Macsyma
Macsyma evolved from research projects funded by the U.S. Defense Advanced Research
Projects Agency at the Massachusetts Institute of Technology around 1968. By the early
1970s, Macsyma was being widely used for symbolic computation in research projects at
M.I.T. and the National Labs. In the late 1970s the U.S. government drastically reduced
the funding for symbolic mathematics software development. Authorities reasoned then that
faster vectorized supercomputers and better numerical mathematical software could solve all
U.S. mathematical analysis needs. Around 1980 M.I.T. began seeking a commercial licensee
for Macsyma and in 1982, licensed Macsyma to Symbolics, Inc., an early workstation spin-off
from M.I.T.
In April of 1992, Macsyma Inc. acquired the Macsyma software business from Symbolics,
Inc. Bolstered by private investors from across the U.S. who understood the potential of the
software, Macsyma Inc. has been reinvesting in Macsyma to make the software a suitable
tool for a wide range of users, and has already brought the PC user interface and scientific
graphics up to modern Windows standards.
Macsyma’s strength lies in the areas of basic algebra and calculus, O.D.E.s, symbolic-
numerical linear algebra and, when combined with PDEase, in symbolic and numerical
treatment of P.D.E.s. The current version of PC Macsyma 2.4 is available for MS-Windows.
Maple
The MAPLE project was conceived in November 1980 with the primary goal to design
a computer algebra system which would be accessible to large numbers of researchers in
mathematics, engineering, and science, and to large numbers of students for educational
purposes. One of the key ideas was to make space efficiency, as well as time efficiency, a fun-
damental criterion. The vehicle for achieving this goal was to use a systems implementation
language from the BCPL family rather than LISP. Another aspect of making the system
widely accessible was to design for portability, so that the system could be ported to the
various microcomputers which were appearing in the marketplace. A very important aspect
of achieving the efficiency goal was to carry out research into the design of algorithms for
the various mathematical operations.
Maple is a comprehensive general purpose computer algebra system that can do both sym-
bolic and numerical calculations and has facilities for 2- and 3-dimensional graphical output.
Maple is also a programming language. In fact, almost all of the mathematical and graphical
facilities are written in Maple itself, not in a systems implementation language as in other
computer algebra systems. These Maple programs reside on disk in the Maple library and
are loaded on demand. The programming language supports procedural and functional pro-
gramming. Because of the clean separation of the user interface from the kernel and library,
Maple has been incorporated into other software packages, such as Mathcad and MATLAB,
to make the symbolic functionality of the program accessible to as wide an audience
as possible.
At the University of Waterloo, Maple 8 is available to all students through the Nexus system
or through the University dial up system using an X-Windows package.
(see http://ist.uwaterloo.ca/cs/chip/gs/newgs.html for further details)
Mathematica
Mathematica is a product of Wolfram Research Inc., founded by the ‘architect’ of the system,
Stephen Wolfram, a MacArthur Prize recipient in 1981. During this period (in the early
1980s) Wolfram developed a language called SMP (Symbolic Manipulation Program),
written in C. This evolved into another program, Mathematica.
Mathematica, like Maple, offers capabilities for symbolic and numerical computations. Nu-
meric computations can be carried out to ‘arbitrary’ precision, though obviously the higher
the precision, the more time required to complete the calculation. There is a full suite of
functions supporting 2- and 3-dimensional plotting of data and functions. Mathematica in-
corporates a graphics language capability which can be used to produce visualisations of
complex objects.
Web Site: http://www.wolfram.com/
MuPAD
MuPAD is a general purpose (parallel) computer algebra system, developed at the University
of Paderborn in Germany. MuPAD is available via FTP for several operating systems. The
net version is limited in its memory access and cannot be used to solve really hard problems,
but all non-commercial users can get the full version for free by obtaining a MuPAD license
(key) that unlocks all memory.
Reduce
The first version of REDUCE was developed and published by Anthony C. Hearn more
than 25 years ago. The starting point was a class of formal computations for problems
in high energy physics (Feynman diagrams, cross sections etc.), which are hard and time
consuming if done by hand. Although the facilities of the current REDUCE are much more
advanced than those of the early versions, the direction towards big formal computations
in applied mathematics, physics and engineering has been stable over the years, but with a
much broader set of applications.
Like symbolic computation in general, REDUCE has profited from the increasing power of
computer architectures and from the information exchange made available by recent network
developments. Spearheaded by A.C. Hearn, several groups in different countries take part
in the REDUCE development, and the contributions of users have significantly widened the
application field.
Today REDUCE can be used with a variety of hardware platforms from the Windows-based
personal computer up to the Cray supercomputer. However, the primary vehicle is the class
of advanced UNIX workstations.
REDUCE is based on a dialect of Lisp called “Standard Lisp”, and the differences between
versions are the result of different implementations of this Lisp; in each case the source code
for REDUCE itself remains the same. The complete source code for REDUCE is available
through ftp sites worldwide.
Web Site: http://www.uni-koeln.de/REDUCE/ or http://www.zib.de/Symbolik/reduce/
Others
Mathcad is an easy to use tool for basic mathematical calculations that has a Maple engine
for doing symbolic computation. It is fully WYSIWYG.
MATLAB is a good number cruncher. It also has a Maple engine for doing symbolic operations.
A student version is available.
GNU-calc runs inside GNU Emacs and is written entirely in Emacs Lisp. It does the usual
things: arbitrary precision integer, real, and complex arithmetic (all written in Lisp), sci-
entific functions, symbolic algebra and calculus, matrices, graphics, etc., and can display
expressions with square root signs and integrals by drawing them on the screen with ASCII
characters. It comes with a well-written 600-page on-line manual. You can FTP it from any
GNU site.
Glossary of Symbols

Symbol                          Quantity

B_n                             Bernoulli number
H_n^{(1)}(x), H_n^{(2)}(x)      Hankel functions (Bessel functions of the third kind)
h_n^{(1)}(x), h_n^{(2)}(x)      spherical Hankel functions
P_n^m(x)                        associated Legendre function of the first kind
Q_n^m(x)                        associated Legendre function of the second kind
q                               Jacobi nome

Greek Characters

\gamma                          Euler-Mascheroni constant
\binom{\lambda}{n}              binomial coefficient
\psi(x)                         psi function (the logarithmic derivative of the Gamma function)
Selected References
1. Abramowitz, M. and Stegun, I.A., Handbook of Mathematical Functions, Dover,
New York, 1965.
2. Artin, E., The Gamma Function, Holt, Rinehart and Winston, New York, 1964.
3. Erdelyi, A., Magnus, W., Oberhettinger, F. and Tricomi, F.G., Higher Tran-
scendental Functions, Bateman Manuscript Project, Vols. 1-3, McGraw-Hill, New
York, 1953.
5. Hobson, E.W., The Theory of Spherical and Ellipsoidal Harmonics, Cambridge Uni-
versity Press, London, 1931.
6. Hochstadt, H., Special Functions of Mathematical Physics, Holt, Rinehart and Win-
ston, New York, 1961.
7. Jahnke, E., Emde, F. and Losch, F., Tables of Higher Functions, 6th Edition,
McGraw-Hill, New York, 1960.
10. MacRobert, T.M., Spherical Harmonics, 2nd Edition, Methuen, London, 1947.
11. Magnus, W., Oberhettinger, F. and Soni, R.P., Formulas and Theorems for the
Functions of Mathematical Physics, 3rd Edition, Springer-Verlag, New York, 1966.
12. McLachlan, N.W., Bessel Functions for Engineers, 2nd Edition, Oxford University
Press, London, 1955.
15. Slater, L.J., Confluent Hypergeometric Functions, Cambridge University Press, Lon-
don, 1960.
16. Sneddon, I.N., Special Functions of Mathematical Physics and Chemistry, 2nd Edi-
tion, Oliver and Boyd, Edinburgh, 1961.
17. Watson, G.N., A Treatise on the Theory of Bessel Functions, 2nd Edition, Cambridge
University Press, London, 1931.
18. National Bureau of Standards, Applied Mathematics Series 41, Tables of Error
Function and Its Derivatives, 2nd Edition, Washington, DC, US Government Printing
Office, 1954.
19. Franklin, P., A Treatise on Advanced Calculus, Chapter 16, Dover Publications, New
York, 1940.
20. Hancock, H., Elliptic Integrals, Dover Publications, New York, 1917, pp. 69 and 81.
21. Tranter, C.J., Integral Transforms in Mathematical Physics, 2nd Edition, Methuen,
London, 1956, pp. 67-72.
22. Stroud, A.H. and Secrest, D., Gaussian Quadrature Formulas, Prentice-Hall Inc.,
Englewood Cliffs, NJ, 1966.
23. Bauer, F.L., Rutishauser, H. and Stiefel, E., “New Aspects in Numerical Quadra-
ture,” Symposia in Applied Mathematics, 15, Providence, RI, American Mathematical
Society, 1963, pp. 199-218.
24. Ralston, A., A First Course in Numerical Analysis, McGraw-Hill, New York, 1965.
25. Gerald, C.F., Applied Numerical Analysis, 2nd Edition, Addison-Wesley, Reading,
Mass., 1978.
Factorial, Gamma and Beta Functions
Reading Problems
Outline

    Background
    Definitions
    Theory
        Factorial function
        Gamma function
        Digamma function
        Incomplete Gamma function
        Beta function
        Incomplete Beta function
    Assigned Problems
    References
Background
Louis François Antoine Arbogast (1759-1803), a French mathematician, is generally credited
with being the first to introduce the concept of the factorial as a product of a fixed number
of terms in arithmetic progression. In an effort to generalize the factorial function to non-
integer values, the Gamma function was later presented in its traditional integral form by
the Swiss mathematician Leonhard Euler (1707-1783). In fact, the integral form of the Gamma
function is referred to as the second Eulerian integral. Later, because of its great importance,
it was studied by other eminent mathematicians, including Adrien-Marie Legendre (1752-1833),
Carl Friedrich Gauss (1777-1855), Christoph Gudermann (1798-1852), Joseph Liouville (1809-
1882), Karl Weierstrass (1815-1897) and Charles Hermite (1822-1901), as well as many others.1
The first reported use of the gamma symbol for this function was by Legendre in 1839.2

The first Eulerian integral was introduced by Euler and is typically referred to by its more
common name, the Beta function. The Beta symbol for this function was first
used in 1839 by Jacques P.M. Binet (1786-1856).

At the same time as Legendre and Gauss, Christian Kramp (1760-1826) worked on the
generalized factorial function as it applied to non-integers. His work on factorials was in-
dependent of that of Stirling, although Stirling often receives credit for this effort. Kramp did
achieve one “first” in that he was the first to use the notation n!, although he seems not to
be remembered today for this widely used mathematical notation.3

A complete historical perspective of the Gamma function is given in the work of Godefroy,4
as well as other associated authors given in the references at the end of this chapter.
1 http://numbers.computation.free.fr/Constants/Miscellaneous/gammaFunction.html
2 Cajori, Vol. 2, p. 271
3 Éléments d'arithmétique universelle, 1808
4 M. Godefroy, La fonction Gamma: Théorie, Histoire, Bibliographie, Gauthier-Villars, Paris (1901)
Definitions

1. Factorial

The factorial function can be extended to include all real valued arguments
excluding the negative integers as follows:

    z! = \int_0^\infty e^{-t} t^z \, dt, \qquad z \neq -1, -2, -3, \ldots

2. Gamma

also known as: generalized factorial, Euler's second integral

    \Gamma(z) = \int_0^\infty e^{-t} t^{z-1} \, dt = (z-1)!, \qquad z \neq 0, -1, -2, \ldots

3. Digamma

also known as: psi function, logarithmic derivative of the gamma function

    \psi(z) = \frac{d \ln \Gamma(z)}{dz} = \frac{\Gamma'(z)}{\Gamma(z)}, \qquad z \neq 0, -1, -2, \ldots

4. Incomplete Gamma

The gamma function can be written in terms of two components as follows:

    \gamma(z, x) = \int_0^x e^{-t} t^{z-1} \, dt, \qquad x > 0

    \Gamma(z, x) = \int_x^\infty e^{-t} t^{z-1} \, dt, \qquad x > 0

5. Beta

also known as: Euler's first integral

    B(y, z) = \int_0^1 t^{y-1} (1-t)^{z-1} \, dt = \frac{\Gamma(y) \cdot \Gamma(z)}{\Gamma(y+z)}

6. Incomplete Beta

    B_x(y, z) = \int_0^x t^{y-1} (1-t)^{z-1} \, dt, \qquad 0 \le x \le 1

    I_x(y, z) = \frac{B_x(y, z)}{B(y, z)}
Theory
Factorial Function
The classical case of the integer form of the factorial function, n!, consists of the product of
n and all integers less than n, down to 1, as follows

    n! = \begin{cases} n(n-1)(n-2) \cdots 3 \cdot 2 \cdot 1, & n = 1, 2, 3, \ldots \\ 1, & n = 0 \end{cases}    (1.1)

where by definition, 0! = 1.

The integer form of the factorial function can be considered as a special case of two widely
used functions for computing factorials of non-integer arguments, namely the Pochhammer
polynomial, given as

    (z)_n = z(z+1)(z+2) \cdots (z+n-1) = \frac{\Gamma(z+n)}{\Gamma(z)} = \frac{(z+n-1)!}{(z-1)!}, \qquad n > 0    (1.2)

with (z)_0 = 0! = 1 for n = 0.
While it is relatively easy to compute the factorial function for small integers, it is easy to see
how manually computing the factorial of larger numbers can be very tedious. Fortunately
given the recursive nature of the factorial function, it is very well suited to a computer and
can be easily programmed into a function or subroutine. The two most common methods
used to compute the integer form of the factorial are
direct computation: use iteration to produce the product of all of the counting numbers
between n and 1, as in Eq. 1.1
recursive computation: define a function in terms of itself, where values of the factorial
are stored and simply multiplied by the next integer value in the sequence
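The two computation methods above can be sketched in a few lines of Python (a minimal illustration; the function names are ours):

```python
def factorial_iterative(n):
    """Direct computation: multiply the counting numbers from 1 up to n."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result


def factorial_recursive(n):
    """Recursive computation: n! = n * (n-1)!, with 0! = 1 by definition."""
    return 1 if n == 0 else n * factorial_recursive(n - 1)
```

Both agree with Eq. 1.1; the iterative form avoids the recursion depth limit for very large n.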
Another form of the factorial function is the double factorial, defined as

    n!! = \begin{cases} n(n-2) \cdots 5 \cdot 3 \cdot 1, & n > 0 \text{ odd} \\ n(n-2) \cdots 6 \cdot 4 \cdot 2, & n > 0 \text{ even} \\ 1, & n = -1, 0 \end{cases}    (1.4)

    0!! = 1        5!! = 15
    1!! = 1        6!! = 48
    2!! = 2        7!! = 105
    3!! = 3        8!! = 384
    4!! = 8        9!! = 945
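The double factorial is equally easy to program; a short Python sketch (the function name is ours) that reproduces the table of values above:

```python
def double_factorial(n):
    """n!! : the product n(n-2)(n-4)..., ending at 1 (n odd) or 2 (n even),
    with (-1)!! = 0!! = 1 by definition."""
    if n in (-1, 0):
        return 1
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result
```

For example, double_factorial(9) returns 945 and double_factorial(8) returns 384, matching the tabulated values.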
While there are several identities linking the factorial function to the double factorial, perhaps
the most convenient is

    n! = n!! \, (n-1)!!    (1.5)

Potential Applications

1. Combinations: the number of ways of choosing k items from a set of n is given by

    C(n, k) = \frac{n!}{k! \, (n-k)!}    (1.6)
Gamma Function
The factorial function can be extended to include non-integer arguments through the use of
Euler's second integral, given as

    z! = \int_0^\infty e^{-t} t^z \, dt    (1.7)

Through a simple translation of the z variable we can obtain the familiar gamma function
as follows

    \Gamma(z) = \int_0^\infty e^{-t} t^{z-1} \, dt = (z-1)!    (1.8)
The gamma function is one of the most widely used special functions encountered in advanced
mathematics because it appears in almost every integral or series representation of other
advanced mathematical functions.
Let's first establish a direct relationship between the gamma function given in Eq. 1.8 and
the integer form of the factorial function given in Eq. 1.1. Given the gamma function
Γ(z + 1) = z!, use integration by parts as follows:

    \int u \, dv = uv - \int v \, du

with

    u = t^z \quad \Rightarrow \quad du = z t^{z-1} \, dt
    dv = e^{-t} \, dt \quad \Rightarrow \quad v = -e^{-t}

which leads to

    \Gamma(z+1) = \int_0^\infty e^{-t} t^z \, dt = \Big[ -e^{-t} t^z \Big]_0^\infty + z \int_0^\infty e^{-t} t^{z-1} \, dt
Given the restriction z > 0 for the integer form of the factorial function, it can be seen
that the first term in the above expression goes to zero, since

    t = 0 \quad \Rightarrow \quad t^z \to 0
    t \to \infty \quad \Rightarrow \quad e^{-t} \to 0

Therefore

    \Gamma(z+1) = z \int_0^\infty e^{-t} t^{z-1} \, dt = z \, \Gamma(z), \qquad z > 0    (1.9)

In particular,

    \Gamma(1) = 0! = \int_0^\infty e^{-t} \, dt = \Big[ -e^{-t} \Big]_0^\infty = 1

and in turn

    \Gamma(2) = 1 \cdot \Gamma(1) = 1 \cdot 1 = 1!
    \Gamma(3) = 2 \cdot \Gamma(2) = 2 \cdot 1 = 2!
    \Gamma(4) = 3 \cdot \Gamma(3) = 3 \cdot 2 = 3!

so that in general

    \Gamma(n+1) = n!, \qquad n = 1, 2, 3, \ldots    (1.10)
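The relation Γ(n + 1) = n! can be spot-checked directly with the gamma implementation in Python's standard library:

```python
import math

# Gamma(n+1) should reproduce n! for small positive integers
for n in range(1, 8):
    assert math.isclose(math.gamma(n + 1), math.factorial(n), rel_tol=1e-12)

print(math.gamma(5))  # Γ(5) = 4! = 24.0
```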
The gamma function constitutes an essential extension of the idea of a factorial, since the
argument z is not restricted to positive integer values, but can vary continuously.
From Eq. 1.9 we can write

    \Gamma(z) = \frac{\Gamma(z+1)}{z}

From the above expression it is easy to see that as z → 0 the gamma function approaches
∞; in other words, Γ(0) is undefined.

Given the recursive nature of the gamma function, it is readily apparent that the gamma
function approaches a singularity at each negative integer.

However, for all other values of z, Γ(z) is defined, and the use of the recurrence relationship
for factorials, i.e.

    \Gamma(z+1) = z \, \Gamma(z)

effectively removes the restriction that z be positive, which the integral definition of the
factorial requires. Therefore,

    \Gamma(z) = \frac{\Gamma(z+1)}{z}, \qquad z \neq 0, -1, -2, -3, \ldots    (1.11)
Several other definitions of the Γ-function are available that can be attributed to the pio-
neering mathematicians in this area
Gauss

    \Gamma(z) = \lim_{n \to \infty} \frac{n! \, n^z}{z(z+1)(z+2) \cdots (z+n)}, \qquad z \neq 0, -1, -2, -3, \ldots    (1.12)

Weierstrass

    \frac{1}{\Gamma(z)} = z \, e^{\gamma z} \prod_{n=1}^{\infty} \left( 1 + \frac{z}{n} \right) e^{-z/n}    (1.13)
Figure 1.1: Plot of Γ(z) for −4 ≤ z ≤ 4

Here γ is the Euler-Mascheroni constant, defined by

    \gamma = \lim_{n \to \infty} \left( \sum_{k=1}^{n} \frac{1}{k} - \ln n \right) = 0.57721 \, 56649 \ldots    (1.14)
Other forms of the gamma function are obtained through a simple change of variables, as
follows
    \Gamma(z) = 2 \int_0^\infty y^{2z-1} e^{-y^2} \, dy \qquad \text{by letting } t = y^2    (1.15)

    \Gamma(z) = \int_0^1 \left[ \ln \frac{1}{y} \right]^{z-1} dy \qquad \text{by letting } e^{-t} = y    (1.16)
Relations Satisfied by the Γ-Function

Recurrence Formula

    \Gamma(z+1) = z \, \Gamma(z)    (1.17)

Duplication Formula

    2^{2z-1} \, \Gamma(z) \, \Gamma\!\left(z + \frac{1}{2}\right) = \sqrt{\pi} \, \Gamma(2z)    (1.18)

Reflection Formula

    \Gamma(z) \, \Gamma(1-z) = \frac{\pi}{\sin \pi z}    (1.19)

In particular, setting z = 1/2 in the reflection formula gives

    \Gamma(1/2) = (-1/2)! = 2 \int_0^\infty e^{-y^2} \, dy = \sqrt{\pi}    (1.20)
Combining the results of Eq. 1.20 with the recurrence formula, we see

    \Gamma(1/2) = \sqrt{\pi}

    \Gamma(3/2) = \frac{1}{2} \, \Gamma(1/2) = \frac{\sqrt{\pi}}{2}

    \Gamma(5/2) = \frac{3}{2} \, \Gamma(3/2) = \frac{3}{2} \cdot \frac{\sqrt{\pi}}{2} = \frac{3\sqrt{\pi}}{4}

    \vdots

    \Gamma\!\left(n + \frac{1}{2}\right) = \frac{1 \cdot 3 \cdot 5 \cdots (2n-1)}{2^n} \sqrt{\pi}, \qquad n = 1, 2, 3, \ldots
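The closed form for Γ(n + 1/2) can be checked against a library gamma function; a small Python sketch (the function name is ours):

```python
import math

def gamma_half_integer(n):
    """Γ(n + 1/2) = [1·3·5···(2n-1)] * sqrt(pi) / 2**n  for n = 1, 2, 3, ..."""
    odd_product = 1
    for k in range(1, 2 * n, 2):   # 1, 3, 5, ..., 2n-1
        odd_product *= k
    return odd_product * math.sqrt(math.pi) / 2 ** n

for n in range(1, 6):
    assert math.isclose(gamma_half_integer(n), math.gamma(n + 0.5), rel_tol=1e-12)
```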
For z > 0, Γ(z) has a single minimum within the range 1 ≤ z ≤ 2, at z = 1.46163 21450,
where Γ(z) = 0.88560 31944. Some selected 10 decimal place values of Γ(z) are found in
Table 1.1. For other values of z (z ≠ 0, −1, −2, . . . ), Γ(z) can be computed by means of the
recurrence formula.
Approximations

Asymptotic expansions of the factorial and gamma functions have been developed for
z >> 1. The expansion for the factorial function is

    z! = \Gamma(z+1) = \sqrt{2\pi z} \; z^z e^{-z} A(z)    (1.21)

where

    A(z) = 1 + \frac{1}{12z} + \frac{1}{288z^2} - \frac{139}{51840z^3} - \frac{571}{2488320z^4} + \cdots    (1.22)

and the corresponding expansion for ln Γ(z) is

    \ln \Gamma(z) = \left( z - \frac{1}{2} \right) \ln z - z + \frac{1}{2} \ln(2\pi) + \frac{1}{12z} - \frac{1}{360z^3} + \frac{1}{1260z^5} - \frac{1}{1680z^7} + \cdots    (1.23)

The absolute value of the error is less than the absolute value of the first term neglected.

For large values of z, i.e. as z → ∞, both expansions lead to Stirling's Formula, given as

    z! = \sqrt{2\pi} \; z^{z+1/2} e^{-z}    (1.24)
Even though the asymptotic expansions in Eqs. 1.21 and 1.23 were developed for very large
values of z, they give remarkably accurate values of z! and Γ(z) for small values of z. Table
1.2 shows the relative error between the asymptotic expansion and known accurate values
for arguments in the range 1 ≤ z ≤ 7, where the relative error is defined as the difference
between the approximate and accurate values divided by the accurate value.
Table 1.2: Comparison of Approximate value of z! by Eq. 1.21 and Γ(z) by Eq. 1.23 with
the Accurate values of Mathematica 5.0
The asymptotic expansion for Γ(z) converges very quickly to give accurate values for rela-
tively small values of z. The asymptotic expansion for z! converges less quickly and does
not yield 9 decimal place accuracy even when z = 7.
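The behaviour reported in Table 1.2 is easy to reproduce; a Python sketch of Eqs. 1.21 and 1.22 (truncated after the 1/z^4 term, as above) compared against an accurate gamma:

```python
import math

def stirling_factorial(z):
    """z! ≈ sqrt(2πz) z**z e**(-z) A(z), with A(z) truncated after 1/z**4."""
    A = (1 + 1 / (12 * z) + 1 / (288 * z ** 2)
         - 139 / (51840 * z ** 3) - 571 / (2488320 * z ** 4))
    return math.sqrt(2 * math.pi * z) * z ** z * math.exp(-z) * A

for z in (1, 3, 7):
    exact = math.gamma(z + 1)
    rel_err = abs(stirling_factorial(z) - exact) / exact
    print(z, rel_err)  # the relative error shrinks rapidly as z grows
```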
More accurate values of Γ(z) for small z can be obtained by means of the recurrence formula.
For example, if we want Γ(1 + z) where 0 ≤ z ≤ 1, then by means of the recurrence formula
we can write

    \Gamma(1+z) = \frac{\Gamma(n+z)}{(1+z)(2+z)(3+z) \cdots (n-1+z)}    (1.25)

With n = 5, for example,

    \Gamma(1 + 0.3) = \frac{\Gamma(5.3)}{(1.3)(2.3)(3.3)(4.3)} = 0.89747 \, 0699

This value can be compared with the 10 decimal place value given previously in Table 1.1.
We observe that the absolute error is approximately 3 × 10^{-9}. Comparable accuracy can
be obtained by means of the above equation with n = 6 and 0 ≤ z ≤ 1.
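This recurrence trick can be combined with the asymptotic series of Eq. 1.23 to build a small-argument gamma routine; a Python sketch (the function names and the choice n = 5 are ours):

```python
import math

def ln_gamma_asymptotic(z):
    """ln Γ(z) per the truncated Eq. 1.23; accurate for moderately large z."""
    return ((z - 0.5) * math.log(z) - z + 0.5 * math.log(2 * math.pi)
            + 1 / (12 * z) - 1 / (360 * z ** 3)
            + 1 / (1260 * z ** 5) - 1 / (1680 * z ** 7))

def gamma_recurrence(z, n=5):
    """Γ(1+z) for 0 <= z <= 1 via Eq. 1.25: evaluate Γ(n+z) where the
    asymptotic series is accurate, then divide down by the recurrence."""
    denom = 1.0
    for k in range(1, n):
        denom *= (k + z)          # (1+z)(2+z)...(n-1+z)
    return math.exp(ln_gamma_asymptotic(n + z)) / denom

print(gamma_recurrence(0.3))      # ≈ 0.8974707, i.e. Γ(1.3)
```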
Polynomial Approximation of Γ(z + 1) within 0 ≤ z ≤ 1

Numerous polynomial approximations, based upon the use of Chebyshev polynomials and
the minimization of the maximum absolute error, have been developed for varying degrees
of accuracy. One such approximation developed for 0 ≤ z ≤ 1, due to Hastings8, takes the form

    \Gamma(z+1) = z! = 1 + a_1 z + a_2 z^2 + \cdots + a_8 z^8 + \epsilon(z)

where

    |\epsilon(z)| \le 3 \times 10^{-7}
Series Expansion of 1/Γ(z) for |z| ≤ ∞

The function 1/Γ(z) is an entire function, defined for all values of z. It can be expressed as
a series expansion according to the relationship

    \frac{1}{\Gamma(z)} = \sum_{k=1}^{\infty} C_k z^k, \qquad |z| \le \infty    (1.27)

where the coefficients C_k for 1 ≤ k ≤ 26, accurate to 16 decimal places, are tabulated in
Abramowitz and Stegun1. For 10 decimal place accuracy one can write

    \frac{1}{\Gamma(z)} = \sum_{k=1}^{19} C_k z^k    (1.28)
Potential Applications
1. Gamma Distribution: The probability density function can be defined based on the
Gamma function as follows:

    f(x; \alpha, \beta) = \frac{1}{\Gamma(\alpha) \, \beta^{\alpha}} \, x^{\alpha-1} e^{-x/\beta}
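A direct transcription of this density into Python (parameter names are ours), with a crude numerical check that it integrates to one:

```python
import math

def gamma_pdf(x, alpha, beta):
    """Gamma density: x**(α-1) e**(-x/β) / (Γ(α) β**α)."""
    return x ** (alpha - 1) * math.exp(-x / beta) / (math.gamma(alpha) * beta ** alpha)

# midpoint-rule check that the density integrates to ~1 over (0, 100)
total = sum(gamma_pdf(0.01 + 0.02 * i, 2.0, 1.5) * 0.02 for i in range(5000))
print(total)  # close to 1
```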
Digamma Function
The digamma function is the regularized (normalized) form of the logarithmic derivative of
the gamma function and is sometimes referred to as the psi function.
    \psi(z) = \frac{d \ln \Gamma(z)}{dz} = \frac{\Gamma'(z)}{\Gamma(z)}    (1.29)
The digamma function is shown in Figure 1.2 for arguments in the range −4 ≤ z ≤ 4.

Figure 1.2: Plot of ψ(z) for −4 ≤ z ≤ 4
The ψ-function satisfies relationships which are obtained by taking the logarithmic derivative
of the recurrence, reflection and duplication formulas of the Γ-function. Thus
    \psi(z+1) = \frac{1}{z} + \psi(z)    (1.30)

    \psi(1-z) - \psi(z) = \pi \cot(\pi z)    (1.31)
These formulas may be used to obtain the following special values of the ψ-function:
    \psi(1) = -\gamma    (1.32)

    \psi(1/2) = -\gamma - 2 \ln 2    (1.33)

where γ is the Euler-Mascheroni constant defined in Eq. (1.14). Using Eq. (1.30),

    \psi(n+1) = -\gamma + \sum_{k=1}^{n} \frac{1}{k}, \qquad n = 1, 2, 3, \ldots    (1.34)

    \psi(n + 1/2) = -\gamma - 2 \ln 2 + 2 \sum_{k=1}^{n} \frac{1}{2k-1}, \qquad n = 1, 2, 3, \ldots    (1.36)
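The harmonic-sum formula above can be verified numerically; Python's standard library has no digamma, so the sketch below (an illustration only, not a production method) approximates ψ through a central difference of math.lgamma:

```python
import math

def psi(z, h=1e-5):
    """Numerical ψ(z) via a central difference of ln Γ(z)."""
    return (math.lgamma(z + h) - math.lgamma(z - h)) / (2 * h)

gamma_e = 0.5772156649015329  # Euler-Mascheroni constant

# ψ(n+1) = -γ + sum_{k=1}^{n} 1/k
for n in (1, 5, 10):
    series = -gamma_e + sum(1 / k for k in range(1, n + 1))
    assert abs(psi(n + 1) - series) < 1e-6
```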
The ψ-function has simple representations in the form of definite integrals involving the
variable z as a parameter. Some of these are listed below.

    \psi(z) = -\gamma + \int_0^1 (1-t)^{-1} (1 - t^{z-1}) \, dt, \qquad z > 0    (1.37)

    \psi(z) = -\gamma - \pi \cot(\pi z) + \int_0^1 (1-t)^{-1} (1 - t^{-z}) \, dt, \qquad z < 1    (1.38)

    \psi(z) = \int_0^\infty \left( \frac{e^{-t}}{t} - \frac{e^{-zt}}{1 - e^{-t}} \right) dt, \qquad z > 0    (1.39)

    \psi(z) = \int_0^\infty \left[ e^{-t} - (1+t)^{-z} \right] \frac{dt}{t}, \qquad z > 0
            = -\gamma + \int_0^\infty \left[ (1+t)^{-1} - (1+t)^{-z} \right] \frac{dt}{t}, \qquad z > 0    (1.40)

    \psi(z) = \ln z + \int_0^\infty \left( \frac{1}{t} - \frac{1}{1 - e^{-t}} \right) e^{-zt} \, dt, \qquad z > 0
            = \ln z - \frac{1}{2z} - \int_0^\infty \left( \frac{1}{1 - e^{-t}} - \frac{1}{t} - \frac{1}{2} \right) e^{-zt} \, dt, \qquad z > 0    (1.41)
Series Representation of ψ(z)

    \psi(z) = -\gamma - \sum_{k=0}^{\infty} \left( \frac{1}{z+k} - \frac{1}{1+k} \right), \qquad z \neq 0, -1, -2, -3, \ldots    (1.42)

    \psi(z) = -\gamma - \frac{1}{z} + z \sum_{k=1}^{\infty} \frac{1}{k(z+k)}, \qquad z \neq 0, -1, -2, -3, \ldots    (1.43)

    \psi(z) = \ln z - \sum_{k=0}^{\infty} \left[ \frac{1}{z+k} - \ln\left( 1 + \frac{1}{z+k} \right) \right], \qquad z \neq 0, -1, -2, -3, \ldots    (1.44)

    \psi(z) = \ln z - \frac{1}{2z} - \sum_{n=1}^{\infty} \frac{B_{2n}}{2n z^{2n}}, \qquad z \to \infty    (1.45)

where the B_{2n} are the Bernoulli numbers

    B_0 = 1          B_6 = 1/42
    B_2 = 1/6        B_8 = -1/30    (1.46)
    B_4 = -1/30      B_{10} = 5/66

Retaining the first few terms gives

    \psi(z) = \ln z - \frac{1}{2z} - \frac{1}{12z^2} + \frac{1}{120z^4} - \frac{1}{252z^6} + \cdots, \qquad z \to \infty    (1.47)
The Incomplete Gamma Function γ(z, x), Γ(z, x)

We can generalize the Euler definition of the gamma function by defining the incomplete
gamma function γ(z, x) and its complement Γ(z, x) by the following variable limit integrals

    \gamma(z, x) = \int_0^x e^{-t} t^{z-1} \, dt, \qquad z > 0    (1.48)

and

    \Gamma(z, x) = \int_x^\infty e^{-t} t^{z-1} \, dt, \qquad z > 0    (1.49)

so that

    \gamma(z, x) + \Gamma(z, x) = \Gamma(z)    (1.50)

Figure 1.3 shows plots of γ(z, x), Γ(z, x) and Γ(z), all regularized with respect to Γ(z).
We can clearly see that the addition of γ(z, x)/Γ(z) and Γ(z, x)/Γ(z) leads to a value of
unity, or Γ(z)/Γ(z), for each value of z.
Some special values, integrals and series are listed below for convenience

    \gamma(1+n, x) = n! \left[ 1 - e^{-x} \sum_{k=0}^{n} \frac{x^k}{k!} \right], \qquad n = 0, 1, 2, \ldots    (1.51)

    \Gamma(1+n, x) = n! \, e^{-x} \sum_{k=0}^{n} \frac{x^k}{k!}, \qquad n = 0, 1, 2, \ldots    (1.52)

    \Gamma(-n, x) = \frac{(-1)^n}{n!} \left[ \Gamma(0, x) - e^{-x} \sum_{k=0}^{n-1} (-1)^k \frac{k!}{x^{k+1}} \right], \qquad n = 1, 2, 3, \ldots    (1.53)
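For integer order, Eqs. 1.51 and 1.52 give finite sums that are easy to program; a Python sketch (the function names are ours) that also confirms the complement property γ(z, x) + Γ(z, x) = Γ(z):

```python
import math

def lower_incomplete(n, x):
    """γ(1+n, x) per Eq. 1.51 (integer order)."""
    partial = sum(x ** k / math.factorial(k) for k in range(n + 1))
    return math.factorial(n) * (1 - math.exp(-x) * partial)

def upper_incomplete(n, x):
    """Γ(1+n, x) per Eq. 1.52 (integer order)."""
    partial = sum(x ** k / math.factorial(k) for k in range(n + 1))
    return math.factorial(n) * math.exp(-x) * partial

# γ(1+n, x) + Γ(1+n, x) = Γ(1+n) = n!
for n in (0, 2, 5):
    for x in (0.5, 2.0, 10.0):
        assert math.isclose(lower_incomplete(n, x) + upper_incomplete(n, x),
                            math.factorial(n))
```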
Figure 1.3: Plot of the Incomplete Gamma Functions, regularized with respect to Γ(z), for z = 1, 2, 3, 4, where

    \frac{\gamma(z, x)}{\Gamma(z)} + \frac{\Gamma(z, x)}{\Gamma(z)} = \frac{\Gamma(z)}{\Gamma(z)} = 1
Integral Representations of the Incomplete Gamma Functions

    \gamma(z, x) = x^z \, \mathrm{cosec}(\pi z) \int_0^{\pi} e^{x \cos\theta} \cos(z\theta + x \sin\theta) \, d\theta, \qquad x \neq 0, \; z > 0, \; z \neq 1, 2, \ldots    (1.54)

    \Gamma(z, x) = \frac{e^{-x} x^z}{\Gamma(1-z)} \int_0^{\infty} \frac{e^{-t} t^{-z}}{x+t} \, dt, \qquad z < 1, \; x > 0    (1.55)

    \Gamma(z, xy) = y^z e^{-xy} \int_0^{\infty} e^{-ty} (t+x)^{z-1} \, dt, \qquad y > 0, \; x > 0, \; z > 1    (1.56)

    \gamma(z, x) = \sum_{n=0}^{\infty} \frac{(-1)^n x^{z+n}}{n! \, (z+n)}    (1.57)

    \Gamma(z, x) = \Gamma(z) - \sum_{n=0}^{\infty} \frac{(-1)^n x^{z+n}}{n! \, (z+n)}    (1.58)

    \Gamma(z, x) = e^{-x} x^z \sum_{n=0}^{\infty} \frac{L_n^z(x)}{n+1}, \qquad x > 0    (1.59)

    \frac{\Gamma(z+n, x)}{\Gamma(z+n)} = \frac{\Gamma(z, x)}{\Gamma(z)} + e^{-x} \sum_{k=0}^{n-1} \frac{x^{z+k}}{\Gamma(z+k+1)}    (1.62)

    \frac{d\gamma(z, x)}{dx} = -\frac{d\Gamma(z, x)}{dx} = x^{z-1} e^{-x}    (1.63)
Asymptotic Expansion of Γ(z, x) for Large x

    \Gamma(z, x) = x^{z-1} e^{-x} \left[ 1 + \frac{z-1}{x} + \frac{(z-1)(z-2)}{x^2} + \cdots \right], \qquad x \to \infty    (1.64)

A continued fraction representation is

    \Gamma(z, x) = \cfrac{e^{-x} x^z}{x + \cfrac{1-z}{1 + \cfrac{1}{x + \cfrac{2-z}{1 + \cfrac{2}{x + \cfrac{3-z}{1 + \cdots}}}}}}    (1.65)
Beta Function B(a, b)

Another definite integral which is related to the Γ-function is the Beta function B(a, b),
which is defined as

    B(a, b) = \int_0^1 t^{a-1} (1-t)^{b-1} \, dt, \qquad a > 0, \; b > 0    (1.72)

The relationship between the B-function and the Γ-function can be demonstrated easily. By
means of the new variable

    u = \frac{t}{1-t}

the Beta function becomes

    B(a, b) = \int_0^\infty \frac{u^{a-1}}{(1+u)^{a+b}} \, du, \qquad a > 0, \; b > 0    (1.73)

Now recall that

    \int_0^\infty e^{-pt} t^{z-1} \, dt = \frac{\Gamma(z)}{p^z}    (1.74)

which is obtained from the definition of the Γ-function with the change of variable s = pt.
Setting p = 1 + u and z = a + b, we get

    \frac{1}{(1+u)^{a+b}} = \frac{1}{\Gamma(a+b)} \int_0^\infty e^{-(1+u)t} \, t^{a+b-1} \, dt    (1.75)

and substituting this result into the Beta function in Eq. 1.73 gives

    B(a, b) = \frac{1}{\Gamma(a+b)} \int_0^\infty e^{-t} t^{a+b-1} \left[ \int_0^\infty e^{-ut} u^{a-1} \, du \right] dt
            = \frac{\Gamma(a)}{\Gamma(a+b)} \int_0^\infty e^{-t} t^{b-1} \, dt
            = \frac{\Gamma(a) \cdot \Gamma(b)}{\Gamma(a+b)}    (1.76)
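The identity of Eq. 1.76 is easy to confirm numerically; a Python sketch comparing the gamma-function form against a direct midpoint-rule evaluation of Eq. 1.72 (valid as written for a, b ≥ 1; function names are ours):

```python
import math

def beta(a, b):
    """B(a, b) = Γ(a) Γ(b) / Γ(a+b), per Eq. 1.76."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def beta_integral(a, b, steps=200000):
    """Midpoint-rule evaluation of the defining integral, Eq. 1.72."""
    h = 1.0 / steps
    return h * sum(((i + 0.5) * h) ** (a - 1) * (1 - (i + 0.5) * h) ** (b - 1)
                   for i in range(steps))

assert math.isclose(beta(2.5, 3.0), beta_integral(2.5, 3.0), rel_tol=1e-5)
```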
Figure 1.4: Plot of B(y, 1/2) for −4 ≤ y ≤ 4
All the properties of the Beta function can be derived from the relationships linking the
Γ-function and the Beta function.
Other forms of the beta function are obtained by changes of variables. Thus

    B(a, b) = \int_0^\infty \frac{u^{a-1}}{(1+u)^{a+b}} \, du \qquad \text{by } t = \frac{u}{1+u}    (1.77)

    B(a, b) = 2 \int_0^{\pi/2} \sin^{2a-1}\theta \, \cos^{2b-1}\theta \, d\theta \qquad \text{by } t = \sin^2\theta    (1.78)
Potential Applications
1. Beta Distribution: The Beta distribution is the integrand of the Beta function. It can
be used to estimate the average time of completing selected tasks in time management
problems.
The incomplete Beta function is defined as

    B_x(a, b) = \int_0^x t^{a-1} (1-t)^{b-1} \, dt, \qquad 0 \le x \le 1    (1.79)

and its regularized form as

    I_x(a, b) = \frac{B_x(a, b)}{B(a, b)}    (1.80)

so that I_1(a, b) = 1.
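A numerical sketch of these two definitions in Python (midpoint-rule integration; a, b ≥ 1 assumed, function names are ours), including the check I_1(a, b) = 1:

```python
import math

def incomplete_beta(x, a, b, steps=100000):
    """B_x(a, b): integral of t**(a-1) (1-t)**(b-1) from 0 to x, midpoint rule."""
    h = x / steps
    return h * sum(((i + 0.5) * h) ** (a - 1) * (1 - (i + 0.5) * h) ** (b - 1)
                   for i in range(steps))

def regularized_incomplete_beta(x, a, b):
    """I_x(a, b) = B_x(a, b) / B(a, b)."""
    whole = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return incomplete_beta(x, a, b) / whole

assert math.isclose(regularized_incomplete_beta(1.0, 2.0, 3.0), 1.0, rel_tol=1e-6)
```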
The incomplete Beta function and I_x(a, b) satisfy the following relationships:
Symmetry
Recurrence Formulas
Figure 1.5: Plot of the Incomplete Beta Function B_x(y, 1/2) for x = 0.25, 0.75 and 1
Assigned Problems
Problem Set for Gamma and Beta Functions
1. Use the definition of the gamma function with a suitable change of variable to prove
that

    i) \int_0^\infty e^{-ax} x^n \, dx = \frac{1}{a^{n+1}} \Gamma(n+1), \qquad n > -1, \; a > 0

    ii) \int_a^\infty \exp(2ax - x^2) \, dx = \frac{\sqrt{\pi}}{2} \exp(a^2)

2. Prove that

    \int_0^{\pi/2} \sin^n\theta \, d\theta = \int_0^{\pi/2} \cos^n\theta \, d\theta = \frac{\sqrt{\pi}}{2} \cdot \frac{\Gamma([1+n]/2)}{\Gamma([2+n]/2)}

3. Show that

    \Gamma\!\left( \frac{1}{2} + x \right) \Gamma\!\left( \frac{1}{2} - x \right) = \frac{\pi}{\cos \pi x}

4. Evaluate \Gamma\!\left( -\frac{1}{2} \right) and \Gamma\!\left( -\frac{7}{2} \right).

5. Show that the area enclosed by the axes x = 0, y = 0 and the curve x^4 + y^4 = 1 is

    \frac{\left[ \Gamma(1/4) \right]^2}{8\sqrt{\pi}}

Use both the Dirichlet integral and a conventional integration procedure to substantiate
this result.
29
6. Express each of the following integrals in terms of the gamma and beta functions and
simplify when possible.
$$\text{i)} \quad \int_0^1 \left(\frac{1}{x} - 1\right)^{1/4} dx$$

$$\text{ii)} \quad \int_a^b (b-x)^{m-1} (x-a)^{n-1}\, dx, \qquad \text{with } b > a,\; m > 0,\; n > 0$$

$$\text{iii)} \quad \int_0^\infty \frac{dt}{(1+t)\sqrt{t}}$$
Note: Validate your results using various solution procedures where possible.
$$\frac{A}{4ab} = \frac{1}{2n}\, \frac{\left[\Gamma\left(\dfrac{1}{n}\right)\right]^2}{\Gamma\left(\dfrac{2}{n}\right)}$$

for n = 0.2, 0.4, 0.8, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 100.0

a) the first quadrant area bounded by the curve and the two axes

b) the centroid (x̄, ȳ) of this area

c) the volume generated when the area is revolved about the y-axis

d) the moment of inertia of this volume about its axis
Note: Validate your results using various solution procedures where possible.
9. Starting with

$$\Gamma\left(\frac{1}{2}\right) = \int_0^\infty \frac{e^{-t}\, dt}{\sqrt{t}}$$

show that

$$\left[\Gamma\left(\frac{1}{2}\right)\right]^2 = 4 \int_0^\infty \int_0^\infty \exp\left[-(x^2 + y^2)\right] dx\, dy$$

Further prove that the above double integral over the first quadrant when evaluated
using polar coordinates (r, θ) yields

$$\Gamma\left(\frac{1}{2}\right) = \sqrt{\pi}$$
References
1. Abramowitz, M. and Stegun, I.A., (Eds.), “Gamma (Factorial) Function” and
“Incomplete Gamma Function,” §6.1 and §6.5 in Handbook of Mathematical Functions
with Formulas, Graphs, and Mathematical Tables, 9th printing, Dover, New York, 1972,
pp. 255-258 and pp. 260-263.
2. Andrews, G.E., Askey, R. and Roy, R., Special Functions, Cambridge University
Press, Cambridge, 1999.
3. Artin, E. The Gamma Function, Holt, Rinehart, and Winston, New York, 1964.
4. Barnes, E.W., “The Theory of the Gamma Function,” Messenger Math., (2), Vol.
29, 1900, pp. 64-128.
5. Borwein, J.M. and Zucker, I.J., “Elliptic Integral Evaluation of the Gamma Func-
tion at Rational Values and Small Denominator,” IMA J. Numerical Analysis, Vol.
12, 1992, pp. 519-526.
6. Davis, P.J., “Leonhard Euler’s Integral: Historical profile of the Gamma Function,”
Amer. Math. Monthly, Vol. 66, 1959, pp. 849-869.
7. Erdélyi, A., Magnus, W., Oberhettinger, F. and Tricomi, F.G., “The Gamma
Function,” Ch. 1 in Higher Transcendental Functions, Vol. 1, Krieger, New York,
1981, pp. 1-55.
8. Hastings, C., Approximations for Digital Computers, Princeton University Press,
Princeton, NJ, 1955.
9. Hochstadt, H., Special Functions of Mathematical Physics, Holt, Rinehart and Win-
ston, New York, 1961.
10. Koepf, W., “The Gamma Function,” Ch. 1 in Hypergeometric Summation: An
Algorithmic Approach to Summation and Special Identities, Vieweg, Braunschweig,
Germany, 1998, pp. 4-10.
11. Krantz, S.G., “The Gamma and Beta Functions,” §13.1 in Handbook of Complex
Variables, Birkhäuser, Boston, MA, 1999, pp. 155-158.
12. Legendre, A.M., Memoires de la classe des sciences mathematiques et physiques de
l’Institut de France, Paris, 1809, p. 477, 485, 490.
13. Magnus, W. and Oberhettinger, F., Formulas and Theorems for the Special Func-
tions of Mathematical Physics, Chelsea, New York, 1949.
14. Saibagki, W., Theory and Applications of the Gamma Function, Iwanami Syoten,
Tokyo, Japan, 1952.
15. Spanier, J. and Oldham, K.B., “The Gamma Function Γ(x)” and “The Incomplete
Gamma γ(ν, x) and Related Functions,” Chs. 43 and 45 in An Atlas of Functions,
Hemisphere, Washington, DC, 1987, pp. 411-421 and pp. 435-443.
Error and Complementary Error Functions
Reading Problems
Outline
Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Gaussian function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Error function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Complementary Error function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Assigned Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Background
The error function and the complementary error function are important special functions
which appear in the solutions of diffusion problems in heat, mass and momentum transfer,
probability theory, the theory of errors and various branches of mathematical physics. It
is interesting to note that there is a direct connection between the error function and the
Gaussian function and the normalized Gaussian function that we know as the “bell curve”.
The Gaussian function is given as

$$G(x) = A\, e^{-x^2/(2\sigma^2)}$$

The Gaussian function can be normalized so that the accumulated area under the curve is
unity, i.e. the integral from −∞ to +∞ equals 1. If we note that the definite integral

$$\int_{-\infty}^\infty e^{-ax^2}\, dx = \sqrt{\frac{\pi}{a}}$$

then the normalized Gaussian function becomes

$$G(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-x^2/(2\sigma^2)}$$
If we let

$$t^2 = \frac{x^2}{2\sigma^2} \qquad \text{and} \qquad dt = \frac{dx}{\sqrt{2}\,\sigma}$$

then

$$\int_{-x}^{x} G(x)\, dx = \frac{1}{\sqrt{\pi}} \int_{-x/(\sqrt{2}\sigma)}^{x/(\sqrt{2}\sigma)} e^{-t^2}\, dt$$

or, recognizing that the normalized Gaussian is symmetric about the y-axis, we can write

$$2\int_0^{x} G(x)\, dx = \frac{2}{\sqrt{\pi}} \int_0^{x/(\sqrt{2}\sigma)} e^{-t^2}\, dt = \text{erf}\left(\frac{x}{\sqrt{2}\,\sigma}\right)$$
The complementary error function is then

$$\text{erfc}\; x = 1 - \text{erf}\; x = \frac{2}{\sqrt{\pi}} \int_x^\infty e^{-t^2}\, dt$$
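The Gaussian–erf connection above is easy to verify numerically. The sketch below (standard library only; the function names and the particular σ used in the check are my own choices) integrates the normalized Gaussian over [−x, x] by the midpoint rule and compares against erf(x/(√2 σ)):

```python
import math

def gaussian(x, sigma=1.0):
    # Normalized zero-mean Gaussian: integrates to 1 over the real line.
    return math.exp(-x * x / (2.0 * sigma * sigma)) / (math.sqrt(2.0 * math.pi) * sigma)

def gaussian_area(x, sigma=1.0, n=20000):
    # Midpoint-rule integral of G over [-x, x];
    # this should equal erf(x / (sqrt(2) * sigma)).
    h = 2.0 * x / n
    return sum(gaussian(-x + (i + 0.5) * h, sigma) for i in range(n)) * h
```

The same check with the lower limit at 0 and the result doubled reproduces the symmetric form given above.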
Historical Perspective
The normal distribution was first introduced by de Moivre in an article in 1733 (reprinted in
the second edition of his Doctrine of Chances, 1738 ) in the context of approximating certain
binomial distributions for large n. His result was extended by Laplace in his book Analytical
Theory of Probabilities (1812 ), and is now called the Theorem of de Moivre-Laplace.
Laplace used the normal distribution in the analysis of errors of experiments. The important
method of least squares was introduced by Legendre in 1805. Gauss, who claimed to have
used the method since 1794, justified it in 1809 by assuming a normal distribution of the
errors.
The name bell curve goes back to Jouffret who used the term bell surface in 1872 for a
bivariate normal with independent components. The name normal distribution was coined
independently by Charles S. Peirce, Francis Galton and Wilhelm Lexis around 1875 [Stigler].
This terminology is unfortunate, since it reflects and encourages the fallacy that “everything
is Gaussian”.
Definitions
1. Gaussian Function
The normalized Gaussian curve represents the probability distribution with standard
deviation σ and mean µ relative to the average of a random distribution.

$$G(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(x-\mu)^2/(2\sigma^2)}$$

This is the curve we typically refer to as the “bell curve” where the mean is zero and
the standard deviation is unity.
2. Error Function
The error function equals twice the integral of a normalized Gaussian function between
0 and x/(σ√2).

$$y = \text{erf}\; x = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\, dt \qquad \text{for } x \ge 0,\; y \in [0, 1]$$

where

$$t = \frac{x}{\sqrt{2}\,\sigma}$$
$$1 - y = \text{erfc}\; x = 1 - \text{erf}\; x = \frac{2}{\sqrt{\pi}} \int_x^\infty e^{-t^2}\, dt \qquad \text{for } x \ge 0,\; y \in [0, 1]$$

The inverse error function is defined by

$$x = \text{inerf}\; y$$
inerf y exists for y in the range −1 < y < 1 and is an odd function of y with a
Maclaurin expansion of the form

$$\text{inerf}\; y = \sum_{n=1}^\infty c_n\, y^{2n-1}$$

Similarly, the inverse complementary error function is defined by

$$x = \text{inerfc}\,(1 - y)$$
Theory
Gaussian Function
The Gaussian function or the Gaussian probability distribution is one of the most fundamen-
tal functions. The Gaussian probability distribution with mean µ and standard deviation σ
is a normalized Gaussian function of the form
$$G(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(x-\mu)^2/(2\sigma^2)} \quad (2.1)$$
where G(x), as shown in the plot below, gives the probability that a variate with a Gaussian
distribution takes on a value in the range [x, x + dx]. Statisticians commonly call this
distribution the normal distribution and, because of its shape, social scientists refer to it as
the “bell curve.” G(x) has been normalized so that the accumulated area under the curve
between −∞ ≤ x ≤ +∞ totals to unity. A cumulative distribution function, which totals
the area under the normalized distribution curve is available and can be plotted as shown
below.
[Figure: the Gaussian distribution G(x) and its cumulative distribution function]
When the mean is set to zero (µ = 0) and the standard deviation or variance is set to unity
(σ = 1), we get the familiar normal distribution
$$G(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2} \quad (2.2)$$
which is shown in the curve below. The normal distribution function N (x) gives the prob-
ability that a variate assumes a value in the interval [0, x]
$$N(x) = \frac{1}{\sqrt{2\pi}} \int_0^x e^{-t^2/2}\, dt \quad (2.3)$$
[Figure: plot of the normal distribution function N(x)]
Gaussian distributions have many convenient properties, so random variates with unknown
distributions are often assumed to be Gaussian, especially in physics, astronomy and various
aspects of engineering. Many common attributes such as test scores, height, etc., follow
roughly Gaussian distributions, with few members at the high and low ends and many in
the middle.
Potential Applications
1. Statistical Averaging:
Error Function
The error function is obtained by integrating the normalized Gaussian distribution.
$$\text{erf}\; x = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\, dt \quad (2.4)$$
where the coefficient in front of the integral normalizes the function so that erf(∞) = 1. A
plot of erf x over the range −3 ≤ x ≤ 3 is shown as follows.

[Figure: plot of erf x for −3 ≤ x ≤ 3]
The error function is defined for all values of x and is considered an odd function in x since
erf x = −erf (−x).
The error function can be conveniently expressed in terms of other functions and series as
follows:
$$\text{erf}\; x = \frac{1}{\sqrt{\pi}}\, \gamma\left(\frac{1}{2},\, x^2\right) \quad (2.5)$$

$$= \frac{2x}{\sqrt{\pi}}\, M\left(\frac{1}{2}, \frac{3}{2}, -x^2\right) = \frac{2x}{\sqrt{\pi}}\, e^{-x^2} M\left(1, \frac{3}{2}, x^2\right) \quad (2.6)$$

$$= \frac{2}{\sqrt{\pi}} \sum_{n=0}^\infty \frac{(-1)^n x^{2n+1}}{n!\,(2n+1)} \quad (2.7)$$
where γ(·) is the incomplete gamma function, M(·) is the confluent hypergeometric function
of the first kind and the series solution is a Maclaurin series.
Computer Algebra Systems

Potential Applications

1. Transient Conduction: The one-dimensional diffusion equation

$$\frac{\partial^2 T}{\partial x^2} = \frac{1}{\alpha} \frac{\partial T}{\partial t}$$

for a semi-infinite solid, initially at a uniform temperature T_i, whose surface temperature
is suddenly changed to T_s, has the solution

$$\frac{T(x, t) - T_s}{T_i - T_s} = \text{erf}\left(\frac{x}{2\sqrt{\alpha t}}\right)$$
Complementary Error Function
The complementary error function is defined as

$$\text{erfc}\; x = 1 - \text{erf}\; x = \frac{2}{\sqrt{\pi}} \int_x^\infty e^{-t^2}\, dt \quad (2.8)$$
[Figure: plot of erfc x for −3 ≤ x ≤ 3]
and similar to the error function, the complementary error function can be written in terms
of the incomplete gamma functions as follows:

$$\text{erfc}\; x = \frac{1}{\sqrt{\pi}}\, \Gamma\left(\frac{1}{2},\, x^2\right) \quad (2.9)$$

As shown in Figure 2.5, the superposition of the error function and the complementary error
function when the argument is greater than zero produces a constant value of unity.
Potential Applications

1. Diffusion: In a similar manner to the transient conduction problem described for the
error function, the complementary error function is used in the solution of the diffusion
equation when the boundary condition is a constant surface heat flux, where q_s = q_0:

$$T(x, t) - T_i = \frac{2 q_0 (\alpha t/\pi)^{1/2}}{k}\, \exp\left(\frac{-x^2}{4\alpha t}\right) - \frac{q_0 x}{k}\, \text{erfc}\left(\frac{x}{2\sqrt{\alpha t}}\right)$$
10
1 Erf x Erfc x
0.8 Erf x
erfx
0.6
0.4
0.2 Erfc x
and surface convection, where

$$-k \left.\frac{\partial T}{\partial x}\right|_{x=0} = h\,[T_\infty - T(0, t)]$$

$$\frac{T(x, t) - T_i}{T_\infty - T_i} = \text{erfc}\left(\frac{x}{2\sqrt{\alpha t}}\right) - \exp\left(\frac{hx}{k} + \frac{h^2 \alpha t}{k^2}\right) \text{erfc}\left(\frac{x}{2\sqrt{\alpha t}} + \frac{h\sqrt{\alpha t}}{k}\right)$$
Relations and Selected Values of Error Functions

$$\text{erf}\; 0 = 0 \qquad \text{erfc}\; 0 = 1$$

$$\text{erf}\; \infty = 1 \qquad \text{erfc}\; \infty = 0$$

$$\text{erf}\,(-\infty) = -1$$

$$\int_0^\infty \text{erfc}\; x\; dx = 1/\sqrt{\pi}$$

$$\int_0^\infty \text{erfc}^2\, x\; dx = (2 - \sqrt{2})/\sqrt{\pi}$$

Ten decimal place values for selected values of the argument appear in Table 2.1.
Approximations

Since

$$\text{erf}\; x = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\, dt = \frac{2}{\sqrt{\pi}} \int_0^x \sum_{n=0}^\infty \frac{(-1)^n t^{2n}}{n!}\, dt \quad (2.10)$$

and the series is uniformly convergent, it may be integrated term by term. Therefore

$$\text{erf}\; x = \frac{2}{\sqrt{\pi}} \sum_{n=0}^\infty \frac{(-1)^n x^{2n+1}}{(2n+1)\, n!} \quad (2.11)$$

$$= \frac{2}{\sqrt{\pi}} \left[\frac{x}{1 \cdot 0!} - \frac{x^3}{3 \cdot 1!} + \frac{x^5}{5 \cdot 2!} - \frac{x^7}{7 \cdot 3!} + \frac{x^9}{9 \cdot 4!} - \cdots\right] \quad (2.12)$$
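The term-by-term series in Eq. 2.11 translates directly into code. The sketch below (my own illustration; the truncation depth is arbitrary) sums the Maclaurin series and compares against the library erf — it works well for moderate x, though, as discussed later, cancellation makes it a poor choice for large arguments:

```python
import math

def erf_series(x, terms=60):
    # Maclaurin series for erf (Eq. 2.11):
    #   erf x = (2/sqrt(pi)) * sum_{n>=0} (-1)^n x^(2n+1) / ((2n+1) n!)
    s = sum((-1) ** n * x ** (2 * n + 1) / ((2 * n + 1) * math.factorial(n))
            for n in range(terms))
    return 2.0 * s / math.sqrt(math.pi)
```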
Since

$$\text{erfc}\; x = \frac{2}{\sqrt{\pi}} \int_x^\infty e^{-t^2}\, dt = \frac{2}{\sqrt{\pi}} \int_x^\infty \frac{1}{t}\, e^{-t^2}\, t\, dt$$

we can integrate by parts with

$$u = \frac{1}{t} \qquad dv = e^{-t^2}\, t\, dt$$

$$du = -t^{-2}\, dt \qquad v = -\frac{1}{2}\, e^{-t^2}$$

therefore

$$\int_x^\infty \frac{1}{t}\, e^{-t^2}\, t\, dt = \Big[uv\Big]_x^\infty - \int_x^\infty v\, du = \frac{e^{-x^2}}{2x} - \frac{1}{2} \int_x^\infty \frac{e^{-t^2}}{t^2}\, dt$$
Thus

$$\text{erfc}\; x = \frac{2}{\sqrt{\pi}} \left[\frac{e^{-x^2}}{2x} - \frac{1}{2} \int_x^\infty \frac{e^{-t^2}}{t^2}\, dt\right] \quad (2.13)$$

Continuing the integration by parts leads to the expansion

$$\sqrt{\pi}\, x\, e^{x^2}\, \text{erfc}\; x = 1 + \sum_{n=1}^\infty (-1)^n\, \frac{1 \cdot 3 \cdot 5 \cdots (2n-1)}{(2x^2)^n} \quad (2.15)$$
This series does not converge, since the ratio of the nth term to the (n−1)th does not remain
less than unity as n increases. However, if we take n terms of the series, the remainder,

$$\frac{1 \cdot 3 \cdots (2n-1)}{2^n} \int_x^\infty \frac{e^{-t^2}}{t^{2n}}\, dt$$

is bounded, since

$$\int_x^\infty \frac{e^{-t^2}}{t^{2n}}\, dt < e^{-x^2} \int_x^\infty \frac{dt}{t^{2n}} = \frac{e^{-x^2}}{(2n-1)\, x^{2n-1}}$$
We can therefore stop at any term taking the sum of the terms up to this term as an
approximation of the function. The error will be less in absolute value than the last term
retained in the sum. Thus for large x, erfc x may be computed numerically from the
asymptotic expansion.
$$\sqrt{\pi}\, x\, e^{x^2}\, \text{erfc}\; x = 1 + \sum_{n=1}^\infty (-1)^n\, \frac{1 \cdot 3 \cdot 5 \cdots (2n-1)}{(2x^2)^n} = 1 - \frac{1}{2x^2} + \frac{1 \cdot 3}{(2x^2)^2} - \frac{1 \cdot 3 \cdot 5}{(2x^2)^3} + \cdots \quad (2.16)$$
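The truncated asymptotic expansion of Eq. 2.16 can be sketched as follows (my own illustration; the number of retained terms is the caller's choice). The series is built term by term from the previous term and then multiplied back by e^{-x²}/(√π x); it must be truncated, never summed to convergence:

```python
import math

def erfc_asymptotic(x, terms=5):
    # Truncated asymptotic series (Eq. 2.16):
    #   sqrt(pi) x e^(x^2) erfc x ~ 1 + sum_n (-1)^n (1*3*...*(2n-1)) / (2x^2)^n
    # The absolute error is less than the last term retained, so for large x a
    # few terms suffice; the full series diverges and must not be summed out.
    s, odd_product = 1.0, 1.0
    for n in range(1, terms + 1):
        odd_product *= (2 * n - 1)          # 1*3*5*...*(2n-1)
        s += (-1) ** n * odd_product / (2.0 * x * x) ** n
    return s * math.exp(-x * x) / (math.sqrt(math.pi) * x)
```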
Some other representations of the error functions are given below:
$$\text{erf}\; x = \frac{2}{\sqrt{\pi}}\, e^{-x^2} \sum_{n=0}^\infty \frac{x^{2n+1}}{(3/2)_n} \quad (2.17)$$

$$= \frac{2x}{\sqrt{\pi}}\, M\left(\frac{1}{2}, \frac{3}{2}, -x^2\right) \quad (2.18)$$

$$= \frac{2x}{\sqrt{\pi}}\, e^{-x^2} M\left(1, \frac{3}{2}, x^2\right) \quad (2.19)$$

$$= \frac{1}{\sqrt{\pi}}\, \gamma\left(\frac{1}{2},\, x^2\right) \quad (2.20)$$

$$\text{erfc}\; x = \frac{1}{\sqrt{\pi}}\, \Gamma\left(\frac{1}{2},\, x^2\right) \quad (2.21)$$

The symbols γ and Γ represent the incomplete gamma functions, M denotes the confluent
hypergeometric function or Kummer’s function, and (3/2)_n is the Pochhammer symbol.

The higher derivatives of the error function follow directly from the definition; for example

$$\frac{d^3}{dx^3}\, \text{erf}\; x = -\frac{2}{\sqrt{\pi}}\, \frac{d}{dx}\left[(2x)\, e^{-x^2}\right] = \frac{2}{\sqrt{\pi}}\, (4x^2 - 2)\, e^{-x^2} \quad (2.24)$$

and in general

$$\frac{d^{n+1}}{dx^{n+1}}\, \text{erf}\; x = (-1)^n\, H_n(x)\, \frac{2}{\sqrt{\pi}}\, e^{-x^2} \qquad (n = 0, 1, 2, \ldots) \quad (2.25)$$

where H_n(x) is the Hermite polynomial.
Repeated Integrals of the Complementary Error Function

The repeated integrals of the complementary error function are defined by

$$i^n\, \text{erfc}\; x = \int_x^\infty i^{n-1}\, \text{erfc}\; t\; dt \qquad n = 0, 1, 2, \ldots \quad (2.26)$$

where

$$i^{-1}\, \text{erfc}\; x = \frac{2}{\sqrt{\pi}}\, e^{-x^2} \quad (2.27)$$

The first repeated integral is

$$i\, \text{erfc}\; x = \frac{1}{\sqrt{\pi}}\, \exp(-x^2) - x\, \text{erfc}\; x \quad (2.29)$$

and the second is

$$i^2\, \text{erfc}\; x = \int_x^\infty i\, \text{erfc}\; t\; dt = \frac{1}{4}\left[(1 + 2x^2)\, \text{erfc}\; x - \frac{2}{\sqrt{\pi}}\, x\, \exp(-x^2)\right] = \frac{1}{4}\left[\text{erfc}\; x - 2x \cdot i\,\text{erfc}\; x\right] \quad (2.30)$$

At the origin,

$$i^n\, \text{erfc}\; 0 = \frac{1}{2^n\, \Gamma\left(1 + \dfrac{n}{2}\right)} \qquad (n = -1, 0, 1, 2, 3, \ldots) \quad (2.32)$$

The function y = iⁿerfc x satisfies the differential equation

$$\frac{d^2 y}{dx^2} + 2x\, \frac{dy}{dx} - 2ny = 0 \quad (2.33)$$
The general solution of Eq. 2.33 is of the form

$$y = A\; i^n\, \text{erfc}\; x + B\; i^n\, \text{erfc}\,(-x)$$

The derivatives of the repeated integrals satisfy

$$\frac{d}{dx}\left[i^n\, \text{erfc}\; x\right] = -\, i^{n-1}\, \text{erfc}\; x \qquad (n = 0, 1, 2, 3, \ldots) \quad (2.36)$$

$$\frac{d^n}{dx^n}\left[e^{x^2}\, \text{erfc}\; x\right] = (-1)^n\, 2^n\, n!\; e^{x^2}\; i^n\, \text{erfc}\; x \qquad (n = 0, 1, 2, 3, \ldots) \quad (2.37)$$
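The closed forms for the first two repeated integrals can be sketched directly (my own illustration; function names are mine). A useful cross-check is the value at the origin, iⁿerfc 0 = 1/(2ⁿΓ(1 + n/2)):

```python
import math

def ierfc(x):
    # First repeated integral of erfc (Eq. 2.29).
    return math.exp(-x * x) / math.sqrt(math.pi) - x * math.erfc(x)

def i2erfc(x):
    # Second repeated integral of erfc (Eq. 2.30, last form).
    return 0.25 * (math.erfc(x) - 2.0 * x * ierfc(x))
```

At x = 0 these give 1/√π and 1/4 respectively, matching Eq. 2.32 for n = 1 and n = 2, and the two forms of Eq. 2.30 agree for any x.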
Some Integrals Associated with the Error Function

$$\int_0^{x^2} \frac{e^{-t}}{\sqrt{t}}\, dt = \sqrt{\pi}\; \text{erf}\; x \quad (2.38)$$

$$\int_0^{x} e^{-y^2 t^2}\, dt = \frac{\sqrt{\pi}}{2y}\; \text{erf}\,(xy) \quad (2.39)$$

$$\int_0^{1} \frac{e^{-x^2 t^2}}{1 + t^2}\, dt = \frac{\pi}{4}\, e^{x^2} \left[1 - \{\text{erf}\; x\}^2\right] \quad (2.40)$$

$$\int_0^\infty \frac{e^{-xt}}{\sqrt{y + t}}\, dt = \sqrt{\frac{\pi}{x}}\; e^{xy}\, \text{erfc}\,(\sqrt{xy}) \qquad x > 0 \quad (2.41)$$

$$\int_0^\infty \frac{e^{-x t^2}}{t^2 + y^2}\, dt = \frac{\pi}{2y}\, e^{x y^2}\, \text{erfc}\,(y\sqrt{x}) \qquad x > 0,\; y > 0 \quad (2.42)$$

$$\int_0^\infty \frac{e^{-tx}}{(t + y)\sqrt{t}}\, dt = \frac{\pi}{\sqrt{y}}\, e^{xy}\, \text{erfc}\,(\sqrt{xy}) \qquad x > 0,\; y > 0 \quad (2.43)$$

$$\int_0^\infty e^{-tx}\, \text{erf}\,(\sqrt{yt})\, dt = \frac{\sqrt{y}}{x}\, (x + y)^{-1/2} \qquad (x + y) > 0 \quad (2.44)$$

$$\int_0^\infty e^{-tx}\, \text{erfc}\,(\sqrt{y/t}\,)\, dt = \frac{1}{x}\, e^{-2\sqrt{xy}} \qquad x > 0,\; y > 0 \quad (2.45)$$

$$\int_{-a}^\infty \text{erfc}\,(t)\, dt = i\,\text{erfc}\,(a) + 2a = i\,\text{erfc}\,(-a) \quad (2.46)$$

$$\int_{-a}^a \text{erf}\,(t)\, dt = 0 \quad (2.47)$$

$$\int_{-a}^a \text{erfc}\,(t)\, dt = 2a \quad (2.48)$$

$$\int_{-a}^\infty i\,\text{erfc}\,(t)\, dt = i^2\,\text{erfc}\,(-a) = \frac{1}{2} + a^2 - i^2\,\text{erfc}\,(a) \quad (2.49)$$

$$\int_a^\infty i^n\, \text{erfc}\left(\frac{t + c}{b}\right) dt = b\; i^{n+1}\, \text{erfc}\left(\frac{a + c}{b}\right) \quad (2.50)$$
Numerical Computation of Error Functions
The power series form of the error function is not recommended for numerical computations
when the argument approaches and exceeds the value x = 2 because the large alternat-
ing terms may cause cancellation, and because the general term is awkward to compute
recursively. The function can, however, be expressed as a confluent hypergeometric series.
$$\text{erf}\; x = \frac{2}{\sqrt{\pi}}\, x\, e^{-x^2}\, M\left(1, \frac{3}{2}, x^2\right) \quad (2.51)$$

in which all terms are positive, and no cancellation can occur. If we write

$$\text{erf}\; x = b \sum_{n=0}^\infty a_n \qquad 0 \le x \le 2 \quad (2.52)$$

with

$$b = \frac{2x}{\sqrt{\pi}}\, e^{-x^2} \qquad a_0 = 1 \qquad a_n = \frac{x^2}{(2n+1)/2}\; a_{n-1} \quad n \ge 1$$

then erf x can be computed very accurately (e.g. with an absolute error less than 10⁻⁹).
Numerical experiments show that this series can be used to compute erf x up to x = 5
to the required accuracy; however, the time required for the computation of erf x is much
greater due to the large number of terms which have to be summed. For x ≥ 2 an alternate
method that is considerably faster is recommended which is based upon the asymptotic
expansion of the complementary error function

$$\text{erfc}\; x = \frac{2}{\sqrt{\pi}} \int_x^\infty e^{-t^2}\, dt = \frac{e^{-x^2}}{\sqrt{\pi}\, x}\; {}_2F_0\left(\frac{1}{2}, 1; -\frac{1}{x^2}\right) \qquad x \to \infty \quad (2.53)$$

which cannot be used to obtain arbitrarily accurate values for any x. An expression that
converges for all x > 0 is obtained by converting the asymptotic expansion into a continued
fraction
$$\sqrt{\pi}\, e^{x^2}\, \text{erfc}\; x = \cfrac{1}{x + \cfrac{1/2}{x + \cfrac{1}{x + \cfrac{3/2}{x + \cfrac{2}{x + \cfrac{5/2}{x + \cdots}}}}}} \qquad x > 0 \quad (2.54)$$

$$\text{erfc}\; x = \frac{e^{-x^2}}{\sqrt{\pi}}\; \frac{1}{x+}\; \frac{1/2}{x+}\; \frac{1}{x+}\; \frac{3/2}{x+}\; \frac{2}{x+} \cdots \qquad x > 0 \quad (2.55)$$

It can be demonstrated experimentally that for x ≥ 2 the 16th approximant gives erfc x
with an absolute error less than 10⁻⁹. Thus we can write

$$\text{erfc}\; x = \frac{e^{-x^2}}{\sqrt{\pi}}\; \frac{1}{x+}\; \frac{1/2}{x+}\; \frac{1}{x+}\; \frac{3/2}{x+} \cdots \frac{8}{x} \qquad x \ge 2 \quad (2.56)$$
Using a fixed number of approximants has the advantage that the continued fraction can be
evaluated rapidly beginning with the last term and working upward to the first term.
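The bottom-up evaluation described above can be sketched as follows (my own illustration; the function name and argument names are mine). The partial numerators are 1/2, 1, 3/2, 2, …, i.e. n/2 for the nth level:

```python
import math

def erfc_cf(x, approximants=16):
    # Continued fraction for erfc (Eqs. 2.55/2.56), evaluated from the last
    # term upward, which is fast when the number of approximants is fixed:
    #   erfc x = e^(-x^2)/sqrt(pi) * 1/(x+ (1/2)/(x+ 1/(x+ (3/2)/(x+ ...))))
    cf = 0.0
    for n in range(approximants, 0, -1):
        cf = (n / 2.0) / (x + cf)       # partial numerators n/2
    return math.exp(-x * x) / (math.sqrt(math.pi) * (x + cf))
```

With the default 16 approximants this reproduces the text's claimed absolute accuracy of better than 10⁻⁹ for x ≥ 2; for smaller x more approximants are needed.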
Rational Approximations of the Error Functions (0 ≤ x < ∞)
Numerous rational approximations of the error functions have been developed for digital
computers. The approximations are often based upon the use of economized Chebyshev
polynomials and they give values of erf x from 4 decimal place accuracy up to 24 decimal
place accuracy.
One such approximation (whose coefficients are those of the classical Hastings fit, which
uses the form below) is

$$\text{erf}\; x \approx 1 - \left(a_1 t + a_2 t^2 + a_3 t^3\right) e^{-x^2} \qquad x \ge 0$$

where

$$t = \frac{1}{1 + px}$$

$$p = 0.47047 \qquad a_1 = 0.3480242 \qquad a_2 = -0.0958798 \qquad a_3 = 0.7478556$$

This approximation has a maximum absolute error of $|\epsilon(x)| < 2.5 \times 10^{-5}$.
Another more accurate rational approximation of the same form is

$$\text{erf}\; x \approx 1 - \left(a_1 t + a_2 t^2 + a_3 t^3 + a_4 t^4 + a_5 t^5\right) e^{-x^2} \qquad x \ge 0$$

where

$$t = \frac{1}{1 + px}$$

and the coefficients are

$$p = 0.3275911$$

$$a_1 = 0.254829592 \qquad a_2 = -0.284496736 \qquad a_3 = 1.421413741$$

$$a_4 = -1.453152027 \qquad a_5 = 1.061405429$$

This approximation has a maximum absolute error of $|\epsilon(x)| < 1.5 \times 10^{-7}$.
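The five-term coefficients quoted above match the well-known Hastings-type fit, and under that assumption the approximation can be sketched as below (my own illustration; Horner evaluation of the polynomial in t is a standard efficiency choice):

```python
import math

# Coefficients quoted in the text for the five-term rational approximation.
P = 0.3275911
A = [0.254829592, -0.284496736, 1.421413741, -1.453152027, 1.061405429]

def erf_rational(x):
    # erf x ~ 1 - (a1 t + a2 t^2 + ... + a5 t^5) e^(-x^2),  t = 1/(1 + p x),
    # valid for x >= 0, with maximum absolute error below 1.5e-7.
    t = 1.0 / (1.0 + P * x)
    poly = t * (A[0] + t * (A[1] + t * (A[2] + t * (A[3] + t * A[4]))))
    return 1.0 - poly * math.exp(-x * x)
```

For negative arguments one would use the odd symmetry erf(−x) = −erf(x).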
Assigned Problems
Problem Set for Error and Complementary Error Functions

Due Date: February 12, 2004
1. Evaluate the following integrals to four decimal places using either power series, asymp-
totic series or polynomial approximations:
$$\text{a)} \int_0^2 e^{-x^2}\, dx \qquad \text{b)} \int_{0.001}^{0.002} e^{-x^2}\, dx$$

$$\text{c)} \frac{2}{\sqrt{\pi}} \int_{1.5}^\infty e^{-x^2}\, dx \qquad \text{d)} \frac{2}{\sqrt{\pi}} \int_5^{10} e^{-x^2}\, dx$$
$$\text{e)} \int_1^{1.5} e^{-x^2/2}\, dx \qquad \text{f)} \sqrt{\frac{2}{\pi}} \int_1^\infty e^{-x^2/2}\, dx$$
2. The value of erf 2 is 0.995 to three decimal places. Compare the number of terms
required in calculating this value using:
Compare the approximate errors in each case after two terms; after ten terms.
3. For the function ierfc(x) compute to four decimal places when x = 0, 0.2, 0.4, 0.8,
and 1.6.
4. Prove that

$$\text{i)} \quad \sqrt{\pi}\; \text{erf}\,(x) = \gamma\left(\frac{1}{2},\, x^2\right)$$

$$\text{ii)} \quad \sqrt{\pi}\; \text{erfc}\,(x) = \Gamma\left(\frac{1}{2},\, x^2\right)$$

where $\gamma\left(\frac{1}{2}, x^2\right)$ and $\Gamma\left(\frac{1}{2}, x^2\right)$ are the incomplete Gamma functions defined as:

$$\gamma(a, y) = \int_0^y e^{-u}\, u^{a-1}\, du$$

and

$$\Gamma(a, y) = \int_y^\infty e^{-u}\, u^{a-1}\, du$$
5. Show that $\theta(x, t) = \theta_0\, \text{erfc}\,(x/2\sqrt{\alpha t})$ is the solution of the following diffusion
problem:

$$\frac{\partial^2 \theta}{\partial x^2} = \frac{1}{\alpha} \frac{\partial \theta}{\partial t} \qquad x \ge 0,\; t > 0$$

and

$$\theta(0, t) = \theta_0, \text{ constant}$$

$$\theta(x, t) \to 0 \;\text{ as }\; x \to \infty$$
6. Given $\theta(x, t) = \theta_0\, \text{erf}\,(x/2\sqrt{\alpha t})$:

i) Obtain expressions for $\dfrac{\partial \theta}{\partial t}$ and $\dfrac{\partial \theta}{\partial x}$ at any x and all t > 0

ii) For the function

$$\frac{\sqrt{\pi}}{2}\, \frac{x}{\theta_0}\, \frac{\partial \theta}{\partial x}$$

show that it has a maximum value when $x/2\sqrt{\alpha t} = 1/\sqrt{2}$ and the maximum value
is $1/\sqrt{2e}$.
7. Given the transient point source solution valid within an isotropic half space

$$T = \frac{q}{2\pi k r}\, \text{erfc}\,(r/2\sqrt{\alpha t}), \qquad dA = r\, dr\, d\theta$$

derive the expression for the transient temperature rise at the centroid of a circular
area (πa²) which is subjected to a uniform and constant heat flux q. Superposition of
point source solutions allows one to write

$$T_0 = \int_0^{2\pi} \int_0^a T\, dA$$
8. For a dimensionless time Fo < 0.2 the temperature distribution within an infinite
plate −L ≤ x ≤ L is given approximately by

$$\frac{T(\zeta, Fo) - T_s}{T_0 - T_s} = 1 - \left[\text{erfc}\left(\frac{1 - \zeta}{2\sqrt{Fo}}\right) + \text{erfc}\left(\frac{1 + \zeta}{2\sqrt{Fo}}\right)\right]$$

The mean temperature is

$$\overline{T} = \int_0^1 T(\zeta, Fo)\, d\zeta$$

The initial and surface plate temperatures are denoted by T₀ and Tₛ, respectively. In
dimensionless form the short-time solution is

$$\theta(\zeta, Fo) = 1 - \sum_{n=1}^{3} (-1)^{n+1} \left[\text{erfc}\left(\frac{(2n-1) - \zeta}{2\sqrt{Fo}}\right) + \text{erfc}\left(\frac{(2n-1) + \zeta}{2\sqrt{Fo}}\right)\right]$$

and the series solution is

$$\theta(\zeta, Fo) = \sum_{n=1}^{3} \frac{2(-1)^{n+1}}{\delta_n}\, e^{-\delta_n^2 Fo} \cos(\delta_n \zeta)$$
Table 1: Exact values of θ(0, F o) for the Infinite Plate
Fo θ(0, F o)
0.02 1.0000
0.06 0.9922
0.10 0.9493
0.40 0.4745
1.0 0.1080
2.0 0.0092
References
1. Abramowitz, M. and Stegun, I.A., Handbook of Mathematical Functions, Dover,
New York, 1965.
3. Hochstadt, H., Special Functions of Mathematical Physics, Holt, Rinehart and Winston,
New York, 1961.
4. Jahnke, E., Emde, F. and Lösch, F., Tables of Higher Functions, 6th Edition,
McGraw-Hill, New York, 1960.
7. Magnus, W., Oberhettinger, F. and Soni, R.P., Formulas and Theorems for the
Functions of Mathematical Physics, 3rd Edition, Springer-Verlag, New York, 1966.
9. Sneddon, I.N., Special Functions of Mathematical Physics and Chemistry, 2nd Edi-
tion, Oliver and Boyd, Edinburgh, 1961.
10. National Bureau of Standards, Applied Mathematics Series 41, Tables of Error
Function and Its Derivatives, 2nd Edition, Washington, DC, US Government Printing
Office, 1954.
11. Hastings, C., Approximations for Digital Computers, Princeton University Press,
Princeton, NJ, 1955.
Elliptic Integrals, Elliptic Functions and
Theta Functions
Reading Problems
Outline
Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Elliptic Integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Elliptic Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Theta Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Assigned Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Background
This chapter deals with the Legendre elliptic integrals, the Theta functions and the Jaco-
bian elliptic functions. These elliptic integrals and functions find many applications in the
theory of numbers, algebra, geometry, linear and non-linear ordinary and partial differential
equations, dynamics, mechanics, electrostatics, conduction and field theory.
Elliptic integrals are integrals of the general form

$$f(x) = \int \frac{A(x) + B(x)\sqrt{S(x)}}{C(x) + D(x)\sqrt{S(x)}}\; dx$$
where A(x), B(x), C(x) and D(x) are polynomials in x and S(x) is a polynomial of
degree 3 or 4. Elliptic integrals can be viewed as generalizations of the inverse trigonometric
functions. Within the scope of this course we will examine elliptic integrals of the first and
second kind which take the following forms:
First Kind
Let the modulus k satisfy 0 ≤ k² < 1 (this is sometimes written in terms of the
parameter m ≡ k² or the modular angle α ≡ sin⁻¹ k). The incomplete elliptic integral of the
first kind is written as
$$F(\phi, k) = \int_0^{\sin\phi} \frac{dt}{\sqrt{(1 - t^2)(1 - k^2 t^2)}}, \qquad 0 \le k^2 \le 1 \;\text{ and }\; 0 \le \sin\phi \le 1$$

If we let $t = \sin\theta$ and $dt = \cos\theta\, d\theta = \sqrt{1 - t^2}\, d\theta$, then

$$F(\phi, k) = \int_0^{\phi} \frac{d\theta}{\sqrt{1 - k^2 \sin^2\theta}}, \qquad 0 \le k^2 \le 1 \;\text{ and }\; 0 \le \phi \le \pi/2$$

This is referred to as the incomplete Legendre elliptic integral. The complete elliptic integral
can be obtained by setting the upper bound of the integral to its maximum range, i.e.
sin φ = 1 or φ = π/2, to give

$$K(k) = \int_0^{1} \frac{dt}{\sqrt{(1 - t^2)(1 - k^2 t^2)}} = \int_0^{\pi/2} \frac{d\theta}{\sqrt{1 - k^2 \sin^2\theta}}$$
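The complete integral K(k) can be computed very efficiently with Gauss's arithmetic-geometric mean, the method the problem set later calls for. The sketch below is my own illustration (the fixed iteration count is a safe choice, since the AGM converges quadratically):

```python
import math

def K_agm(k):
    # Complete elliptic integral of the first kind via the arithmetic-
    # geometric mean: K(k) = pi / (2 * agm(1, k')), with k' = sqrt(1 - k^2).
    a, b = 1.0, math.sqrt(1.0 - k * k)
    for _ in range(12):                 # quadratic convergence: 12 is ample
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return math.pi / (2.0 * a)
```

At k = 0 the AGM is trivially 1 and K(0) = π/2, and at k = 1/√2 the routine reproduces the classical lemniscatic value Γ(1/4)²/(4√π) ≈ 1.8540747.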
Second Kind

$$E(\phi, k) = \int_0^{\sin\phi} \frac{\sqrt{1 - k^2 t^2}}{\sqrt{1 - t^2}}\; dt = \int_0^{\phi} \sqrt{1 - k^2 \sin^2\theta}\; d\theta$$

Similarly, the complete elliptic integral can be obtained by setting the upper bound of
integration to the maximum value to get

$$E(k) = \int_0^{1} \frac{\sqrt{1 - k^2 t^2}}{\sqrt{1 - t^2}}\; dt = \int_0^{\pi/2} \sqrt{1 - k^2 \sin^2 t}\; dt$$
Another very useful class of functions can be obtained by inverting the elliptic integrals. As
an example of the Jacobian elliptic function sn we can write

$$u(x = \sin\phi, k) = F(\phi, k) = \int_0^{\sin\phi} \frac{dt}{\sqrt{(1 - t^2)(1 - k^2 t^2)}}$$

$$x = \sin\phi = sn(u, k)$$

or

$$u = \int_0^{sn} \frac{dt}{\sqrt{(1 - t^2)(1 - k^2 t^2)}}$$
While there are 12 different types of Jacobian elliptic functions based on the number of poles
and the upper limit on the elliptic integral, the three most popular are the copolar trio of sine
amplitude, sn(u, k), cosine amplitude, cn(u, k) and the delta amplitude elliptic function,
dn(u, k) where
$$sn^2 u + cn^2 u = 1 \qquad \text{and} \qquad k^2\, sn^2 u + dn^2 u = 1$$
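The inversion described above can be carried out numerically: compute F(φ, k) by quadrature, then solve u = F(φ, k) for the amplitude φ by bisection. The sketch below is my own illustration (Simpson's rule and bisection are my choices; a production routine would use faster methods), valid for 0 ≤ u ≤ K:

```python
import math

def F_incomplete(phi, k, n=2000):
    # Incomplete elliptic integral of the first kind by Simpson's rule (n even).
    h = phi / n
    def f(t):
        return 1.0 / math.sqrt(1.0 - (k * math.sin(t)) ** 2)
    s = f(0.0) + f(phi) + sum((4 if i % 2 else 2) * f(i * h) for i in range(1, n))
    return s * h / 3.0

def jacobi_sn_cn_dn(u, k):
    # Invert u = F(phi, k) for the amplitude phi by bisection, then read off
    # sn = sin(phi), cn = cos(phi), dn = sqrt(1 - k^2 sn^2).
    lo, hi = 0.0, math.pi / 2.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if F_incomplete(mid, k) < u:
            lo = mid
        else:
            hi = mid
    phi = 0.5 * (lo + hi)
    sn = math.sin(phi)
    return sn, math.cos(phi), math.sqrt(1.0 - (k * sn) ** 2)
```

With k = 0 the routine reduces to sn = sin u, cn = cos u, dn = 1, and both identities above hold by construction for any k.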
Historical Perspective
The first reported study of elliptic integrals was in 1655, when John Wallis began to study
the arc length of an ellipse. Both John Wallis (1616-1703) and Isaac Newton (1643-1727)
published an infinite series expansion for the arc length of the ellipse. It was not until the
late 1700’s, however, that Legendre, applying elliptic functions to problems such as the
movement of a simple pendulum and the deflection of a thin elastic bar, showed that such
problems could be defined in terms of simple functions.
Despite forty years of dedication to elliptic functions, Legendre’s work went essentially unno-
ticed by his contemporaries until 1827 when two young and as yet unknown mathematicians
Abel and Jacobi placed the subject on a new basis, and revolutionized it completely.
In 1825, the Norwegian government funded Abel on a scholarly visit to France and Germany.
Abel then traveled to Paris, where he gave an important paper revealing the double period-
icity of the elliptic functions. Among his other accomplishments, Abel wrote a monumental
work on elliptic functions7 which unfortunately was not discovered until after his death.
Jacobi wrote the classic treatise8 on elliptic functions, of great importance in mathematical
physics, because of the need to integrate second order kinetic energy equations. The motion
5. Exercices du Calcul Intégral

6. Traité des Fonctions Elliptiques

7. Abel, N.H., “Recherches sur les fonctions elliptiques,” J. reine angew. Math., 3, 160-190, 1828.

8. Jacobi, C.G.J., Fundamenta Nova Theoriae Functionum Ellipticarum, Regiomonti, Sumtibus fratrum Borntraeger, 1829.
equations in rotational form are integrable only for the three cases of the pendulum, the
symmetric top in a gravitational field, and a freely spinning body, wherein solutions are in
terms of elliptic functions.
Jacobi was also the first mathematician to apply elliptic functions to number theory, for
example, proving the polygonal number theorem of Pierre de Fermat. The Jacobi theta
functions, frequently applied in the study of hypergeometric series, were named in his honor.
In developments of the theory of elliptic functions, modern authors mostly follow Karl Weier-
strass. The notations of Weierstrass’s elliptic functions based on his p-function are conve-
nient, and any elliptic function can be expressed in terms of these. The elliptic functions
introduced by Carl Jacobi, and the auxiliary theta functions (not doubly-periodic), are more
complex but important both for the history and for general theory.
Theory
1. Elliptic Integrals
There are three basic forms of Legendre elliptic integrals that will be examined here; first,
second and third kind. In their most general form, elliptic integrals are presented in a form
referred to as incomplete integrals, where the bounds of the integral representation range
over

$$0 \le \sin\phi \le 1 \qquad \text{or} \qquad 0 \le \phi \le \frac{\pi}{2}$$

The parameter k is called the modulus of the elliptic integral and φ is the amplitude
angle.
The complete integral of the first kind is obtained by setting φ = π/2:

$$F\left(\phi = \frac{\pi}{2}, k\right) = F(\sin\phi = 1, k) = K(k) = K \quad (3.3)$$

A complementary form of the elliptic integral can be obtained by letting the modulus
be

$$(k')^2 = 1 - k^2 \quad (3.4)$$
If we let $v = \tan\theta$ and in turn $dv = \sec^2\theta\, d\theta = (1 + v^2)\, d\theta$, then

$$F(\phi, k') = \int_0^{\tan\phi} \frac{dv}{(1 + v^2)\sqrt{1 - k'^2\, \dfrac{v^2}{1 + v^2}}} = \int_0^{\tan\phi} \frac{dv}{\sqrt{1 + v^2}\, \sqrt{1 + v^2 - k'^2 v^2}} = \int_0^{\tan\phi} \frac{dv}{\sqrt{(1 + v^2)(1 + k^2 v^2)}} \quad (3.5)$$

$$F\left(\phi = \frac{\pi}{2}, k'\right) = F(\sin\phi = 1, k') = K(k') = K' \quad (3.6)$$
b) Second Kind:

$$E(\phi, k) = \int_0^{\sin\phi} \sqrt{\frac{1 - k^2 t^2}{1 - t^2}}\; dt, \qquad 0 \le k^2 \le 1 \quad (3.7)$$

or its equivalent

$$E(\phi, k) = \int_0^{\phi} \sqrt{1 - k^2 \sin^2\theta}\; d\theta, \qquad 0 \le k^2 \le 1,\quad 0 \le \phi \le \frac{\pi}{2} \quad (3.8)$$

And similarly, the complete elliptic integral of the second kind can be written as

$$E\left(\phi = \frac{\pi}{2}, k\right) = E(\sin\phi = 1, k) = E(k) = E \quad (3.9)$$

$$E\left(\phi = \frac{\pi}{2}, k'\right) = E(\sin\phi = 1, k') = E(k') = E' \quad (3.10)$$
c) Third Kind:

$$\Pi(\phi, n, k) = \int_0^{\sin\phi} \frac{dt}{(1 + n t^2)\sqrt{(1 - t^2)(1 - k^2 t^2)}}, \qquad 0 \le k^2 \le 1 \quad (3.11)$$

or its equivalent

$$\Pi(\phi, n, k) = \int_0^{\phi} \frac{d\theta}{(1 + n \sin^2\theta)\sqrt{1 - k^2 \sin^2\theta}}, \qquad 0 \le k^2 \le 1,\quad 0 \le \phi \le \frac{\pi}{2} \quad (3.12)$$
Computer Algebra Systems

More than any other special function, you need to be very careful about the arguments
you give to elliptic integrals and elliptic functions. There are several conventions in
common use in competing Computer Algebra Systems and it is important that you
check the documentation provided with the CAS to determine which convention is
incorporated.
Potential Applications
Determining the arc length of a circle is easily achieved using trigonometric functions;
however, elliptic integrals must be used to find the arc length of an ellipse.

Tracing the arc of a pendulum can be achieved for small angles using trigonometric
functions, but to determine the full path of the pendulum elliptic integrals must be
used.
Relations and Selected Values of Elliptic Integrals
Complete Elliptic Integrals of the First and Second Kind, K, K , E, E
The four elliptic integrals K, K′, E, and E′ satisfy the following identity attributed
to Legendre

$$K E' + K' E - K K' = \frac{\pi}{2} \quad (3.13)$$

The elliptic integrals K and E as functions of the modulus k are connected by means
of the following equations:

$$\frac{dE}{dk} = \frac{1}{k}\,(E - K) \quad (3.14)$$

$$\frac{dK}{dk} = \frac{1}{k\,(k')^2} \left[E - (k')^2 K\right] \quad (3.15)$$
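The Legendre identity of Eq. 3.13 makes a good numerical cross-check of any routine for K and E. The sketch below (my own illustration; the modulus k = 0.6 is an arbitrary choice) evaluates all four integrals by Simpson's rule and confirms that the combination equals π/2:

```python
import math

def simpson(f, a, b, n=2000):
    # Composite Simpson rule (n even).
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

def K(k):
    # Complete elliptic integral of the first kind.
    return simpson(lambda t: 1.0 / math.sqrt(1.0 - (k * math.sin(t)) ** 2),
                   0.0, math.pi / 2.0)

def E(k):
    # Complete elliptic integral of the second kind.
    return simpson(lambda t: math.sqrt(1.0 - (k * math.sin(t)) ** 2),
                   0.0, math.pi / 2.0)

k = 0.6
kp = math.sqrt(1.0 - k * k)                              # complementary modulus
legendre = K(k) * E(kp) + K(kp) * E(k) - K(k) * K(kp)    # should equal pi/2
```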
Incomplete Elliptic Integrals of the First and Second Kind, F (φ, k), E(φ, k)
$$D(\phi, k) = \int_0^\phi \frac{\sin^2\theta}{\Delta}\; d\theta = \frac{F - E}{k^2} \quad (3.16)$$

where

$$\Delta = \sqrt{1 - k^2 \sin^2\theta} \quad (3.17)$$

Therefore

$$F = E + k^2 D \quad (3.18)$$
Other incomplete integrals expressed by D, E, and F are
$$\int_0^\phi \frac{\cos^2\theta}{\Delta}\; d\theta = F - D \quad (3.19)$$

$$\int_0^\phi \frac{\tan^2\theta}{\Delta}\; d\theta = \frac{\Delta \tan\phi - E}{(k')^2} \quad (3.20)$$

$$\int_0^\phi \frac{d\theta}{\Delta \cos^2\theta} = \frac{\Delta \tan\phi + k^2 (D - F)}{(k')^2} \quad (3.21)$$

$$\int_0^\phi \frac{\sin^2\theta}{\Delta^3}\; d\theta = \frac{F - D}{(k')^2} - \frac{\sin\phi \cos\phi}{(k')^2\, \Delta} \quad (3.22)$$

$$\int_0^\phi \frac{\cos^2\theta}{\Delta^3}\; d\theta = D + \frac{\sin\phi \cos\phi}{\Delta} \quad (3.23)$$

$$\int_0^\phi \Delta \tan^2\theta\; d\theta = \Delta \tan\phi + F - 2E \quad (3.24)$$

Special values include

$$E(0, k) = 0 \quad (3.25)$$

$$F(0, k) = 0 \quad (3.26)$$

$$E(\phi, 0) = \phi \quad (3.28)$$

$$F(\phi, 0) = \phi \quad (3.29)$$

$$K(0) = K'(1) = \pi/2 \quad (3.33)$$

For the integral of the third kind, if α² > 0, α² ≠ 1,

$$\Pi = \frac{1}{1 - \alpha^2} \left[\ln(\tan\phi + \sec\phi) + |\alpha| \arctan(|\alpha| \sin\phi)\right]$$
Jacobi’s Nome q and Complementary Nome q₁

The nome q and the complementary nome q₁ are defined by

$$q = e^{-\pi K'/K} \qquad \text{and} \qquad q_1 = e^{-\pi K/K'}$$

Therefore

$$\ln q\; \ln q_1 = \pi^2 \quad (3.46)$$

To compute the nome q we set k = sin α and introduce the parameter ε defined by

$$\varepsilon = \frac{1 - \sqrt{\cos\alpha}}{1 + \sqrt{\cos\alpha}} \quad (3.47)$$

and, for the complementary nome,

$$\varepsilon_1 = \frac{1 - \sqrt{\sin\alpha}}{1 + \sqrt{\sin\alpha}} \quad (3.49)$$
2. Elliptic Functions
There are several types of elliptic functions including the Weierstrass elliptic functions
as well as related theta functions but the most common elliptic functions are the
Jacobian elliptic functions, based on the inverses of the three types of elliptic integrals.
1. Jacobi elliptic functions: The three standard forms of Jacobi elliptic integrals
are denoted as sn(u, k), cn(u, k) and dn(u, k) and are the sine, cosine and
delta amplitude elliptic functions, respectively. These functions are obtained by
inverting the elliptic integral of the first kind where
$$u = F(\phi, k) = \int_0^\phi \frac{d\theta}{\sqrt{1 - k^2 \sin^2\theta}} \quad (3.51)$$

where 0 < k² < 1; k is referred to as the elliptic modulus of u, and φ, the
upper bound on the elliptic integral, is referred to as the Jacobi amplitude (amp).
The inversion of the elliptic integral gives $x = \sin\phi = sn(u, k)$. In the special
case k = 0 the Jacobi functions reduce to circular functions; for example

$$dn(u, 0) = 1 \quad (3.58)$$
Figure 3.1: Plot of Jacobian Elliptic Functions sn(u), cn(u) and dn(u) for k = 1/2
In total there are 12 Jacobian elliptic functions, where the remaining 9 can be
related to the 3 we have already seen as follows:

$$cd(u) = \frac{cn(u)}{dn(u)} \qquad dc(u) = \frac{dn(u)}{cn(u)} \qquad ns(u) = \frac{1}{sn(u)}$$

$$sd(u) = \frac{sn(u)}{dn(u)} \qquad nc(u) = \frac{1}{cn(u)} \qquad ds(u) = \frac{dn(u)}{sn(u)}$$

$$nd(u) = \frac{1}{dn(u)} \qquad sc(u) = \frac{sn(u)}{cn(u)} \qquad cs(u) = \frac{cn(u)}{sn(u)}$$
2. Weierstrass elliptic functions: Whereas the Jacobi elliptic functions provide
solutions to differential equations of the form

$$\frac{d^2 x}{dt^2} = A + Bx + Cx^2 + Dx^3$$

the Weierstrass elliptic function has just one double pole and is a solution to

$$\frac{d^2 x}{dt^2} = A + Bx + Cx^2$$
We will focus primarily on the Jacobi elliptic function in this course but you should be
aware of the Weierstrass form of the function.
3. Theta Functions
Theta functions are the elliptic analogs of the exponential function and are typically
written as θₐ(u, q), where a ranges from 1 to 4 to represent the four variations of
the theta function, u is the argument of the function and q is the nome, given as

$$q = e^{i\pi \tau} = e^{-\pi K'/K} \quad (3.59)$$

where

$$\tau = i\, \frac{K'(k)}{K(k)}$$
Assigned Problems
Problem Set for Elliptic Integrals and Functions
4x2 + 9y 2 = 36
y2
x2 + =1
4
√
between (0, 2) and (1/2, 3). Note that b > a in this problem. Compute the arc
length to three decimal places.
2
dx 1 2
! = K
0 (4 − x2 )(9 − x2 ) 3 3
5. Prove that

$$\int_0^{\pi/2} \frac{dx}{\sqrt{\sin x}} = \int_0^{\pi/2} \frac{dx}{\sqrt{\cos x}} = \sqrt{2}\,K\!\left(\frac{1}{\sqrt{2}}\right)$$
6. Given q = 1/2, compute k, K and K′ to six decimal places.
7. Compute to three decimal places the area of the ellipsoid with semi-axes 3, 2, and 1.
$$R^* = k\,a\,R = K\!\left(\sqrt{1 - \left(\frac{b}{a}\right)^2}\,\right)$$

Compute R* to four decimal places for the following values of a/b: 1, 1.5, 2, 3, 4 and
5. Use the arithmetic-geometric mean method of Gauss.
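The arithmetic-geometric mean method of Gauss referred to above evaluates K(k) very rapidly. A brief sketch, assuming the standard identity K(k) = π / (2·AGM(1, k′)) with k′ = √(1 − k²); the convergence tolerance is an arbitrary choice:

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of a and b (Gauss's iteration)."""
    while abs(a - b) > tol * a:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return 0.5 * (a + b)

def K(k):
    """Complete elliptic integral of the first kind:
    K(k) = pi / (2 * AGM(1, k')), k' = sqrt(1 - k^2)."""
    return math.pi / (2.0 * agm(1.0, math.sqrt(1.0 - k * k)))
```

The iteration converges quadratically, so K is obtained to machine precision in a handful of steps; for example K(0) = π/2 exactly.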
$$\mathrm{i)}\quad \frac{dE}{dk} = \frac{E - K}{k}$$

$$\mathrm{ii)}\quad \frac{d^2E}{dk^2} = -\frac{1}{k}\,\frac{dK}{dk} = -\frac{E - (k')^2K}{k^2(k')^2}$$
$$\mathrm{i)}\quad \int K\,dk = \frac{\pi k}{2}\left[1 + \sum_{n=1}^{\infty}\frac{[(2n)!]^2\,k^{2n}}{(2n+1)\,2^{4n}\,(n!)^4}\right]$$

$$\mathrm{ii)}\quad \int kK\,dk = E - (k')^2K$$

$$\mathrm{iii)}\quad \int kE\,dk = \frac{1}{3}\left[(1 + k^2)E - (k')^2K\right]$$
11. Show that

$$\mathrm{i)}\quad \int_0^1\frac{kE\,dk}{k'} = \int_0^1 E(u')\,du = \frac{\pi^2}{8}$$

$$\mathrm{ii)}\quad \int_0^1\frac{K\,dk}{1+k} = \frac{\pi^2}{8}$$
$$\mathrm{i)}\quad \frac{d}{d\phi}F(\phi, k) = \frac{1}{\sqrt{1 - k^2\sin^2\phi}}$$

$$\mathrm{ii)}\quad \frac{d}{d\phi}E(\phi, k) = \sqrt{1 - k^2\sin^2\phi}$$
References
1. Abramowitz, M. and Stegun, I.A., (Eds.), “Gamma (Factorial) Function” and
“Incomplete Gamma Function.” §6.1 and §6.5 in Handbook of Mathematical Functions
with Formulas, Graphs, and Mathematical Tables, 9th printing, Dover, New York, 1972,
pp. 255-258 and pp. 260-263.
2. Anderson, D., Vamanamurthy, K. and Vuorinen, M., “Functional Inequalities
for Complete Elliptic Integrals,” SIAM J. Math. Anal., 21, 1990, pp. 536-549.
3. Anderson, D., Vamanamurthy, K. and Vuorinen, M., “Functional inequalities
for hypergeometric functions and complete elliptic integrals,” SIAM J. Math. Anal.,
23, 1992, pp. 512-524.
4. Arfken, G., Mathematical Methods for Physicists, 3rd ed., Section 5.8, “Elliptic
Integrals,” Orlando, FL, Academic Press, pp. 321-327, 1985.
5. Borwein, J.M. and Borwein, P.B., Pi and the AGM: A Study in Analytic Number
Theory and Computational Complexity, Wiley, New York, 1987.
6. Byrd, P.F. and Friedman, M.D., Handbook of Elliptic Integrals for Engineers and
Scientists, Second ed., Springer-Verlag, New York, 1971.
7. Carlson, B.C., “Some Inequalities for Hypergeometric Functions,” Proc. Amer.
Math. Soc., 17, 1966, pp. 32-39.
8. Carlson, B.C., “Inequalities for a Symmetric Elliptic Integral,” Proc. Amer. Math.
Soc., 25, 1970, pp. 698-703.
9. Carlson, B.C. and Gustafson, J.L., “Asymptotic Expansion of the First Elliptic
Integral,” SIAM J. Math. Anal., 16, 1985, pp. 1072-1092.
10. Cayley, A., Elliptic Functions, Second ed., Dover Publications, New York, 1961.
11. Hancock H., Elliptic Integrals, Dover Publications, New York, 1917.
12. Karman, T. von and Biot, M.A., Mathematical Methods in Engineering: An
Introduction to the Mathematical Treatment of Engineering Problems, McGraw-Hill,
New York, 1940.
13. King, L.V., The Direct Numerical Calculation of Elliptic Functions and Integrals,
Cambridge University Press, London, 1924.
14. Prasolov, V. and Solovyev, Y., Elliptic Functions and Elliptic Integrals, Amer.
Math. Soc., Providence, RI, 1997.
15. Weisstein, E.W., “Books about Elliptic Integrals,”
http://www.ericweisstein.com/encyclopedias/books/EllipticIntegrals.html.
16. Whittaker, E.T. and Watson, G.N. A Course in Modern Analysis, 4th ed., Cam-
bridge University Press, Cambridge, England, 1990.
Bessel Functions of the First and Second Kind
Reading Problems
Outline
Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Bessel Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6
Modified Bessel Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Kelvin’s Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Hankel Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Orthogonality of Bessel Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Assigned Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Background
Bessel functions are named for Friedrich Wilhelm Bessel (1784 - 1846), however, Daniel
Bernoulli is generally credited with being the first to introduce the concept of Bessel func-
tions in 1732. He used the function of zero order as a solution to the problem of an oscillating
chain suspended at one end. In 1764 Leonhard Euler employed Bessel functions of both zero
and integral orders in an analysis of vibrations of a stretched membrane, an investigation
which was further developed by Lord Rayleigh in 1878, where he demonstrated that Bessel
functions are particular cases of Laplace's functions.
Bessel, while receiving named credit for these functions, did not incorporate them into his
work as an astronomer until 1817. The Bessel function was the result of Bessel's study of a
problem of Kepler for determining the motion of three bodies moving under mutual gravita-
tion. In 1824, he incorporated Bessel functions in a study of planetary perturbations where
the Bessel functions appear as coefficients in a series expansion of the indirect perturbation
of a planet, that is the motion of the Sun caused by the perturbing body. It was likely
Lagrange’s work on elliptical orbits that first suggested to Bessel to work on the Bessel
functions.
The notation Jz,n was first used by Hansen9 (1843) and subsequently by Schlomilch10 (1857)
and later modified to Jn(2z) by Watson (1922).
Subsequent studies of Bessel functions included the work of Mathews11 in 1895, “A treatise
on Bessel functions and their applications to physics” written in collaboration with Andrew
Gray. It was the first major treatise on Bessel functions in English and covered topics such
as applications of Bessel functions to electricity, hydrodynamics and diffraction. In 1922,
Watson first published his comprehensive examination of Bessel functions “A Treatise on
the Theory of Bessel Functions” 12 .
9. Hansen, P.A., “Ermittelung der absoluten Störungen in Ellipsen von beliebiger Excentricität und Neigung, I.”
Schriften der Sternwarte Seeberg, Gotha, 1843.
10. Schlömilch, O.X., “Ueber die Bessel’schen Functionen.” Z. für Math. u. Phys. 2, 137-165, 1857.
11. George Ballard Mathews, “A Treatise on Bessel Functions and Their Applications to Physics,” 1895.
12. G.N. Watson, “A Treatise on the Theory of Bessel Functions,” Cambridge University Press, 1922.
Definitions
1. Bessel Equation
The second order differential equation given as
$$x^2\frac{d^2y}{dx^2} + x\frac{dy}{dx} + (x^2 - \nu^2)\,y = 0$$

is known as Bessel's equation. The solution to Bessel's equation yields Bessel functions
of the first and second kind as follows:
y = A Jν (x) + B Yν (x)
where A and B are arbitrary constants. While Bessel functions are often presented in text
books and tables in the form of integer order, i.e. ν = 0, 1, 2, . . . , in fact they are defined
for all real values of −∞ < ν < ∞.
2. Bessel Functions
a) First Kind: Jν (x) in the solution to Bessel’s equation is referred to as a Bessel
function of the first kind.
b) Third Kind: The Hankel function or Bessel function of the third kind can be
written as
Because of the linear independence of the Bessel function of the first and second
kind, the Hankel functions provide an alternative pair of solutions to the Bessel
differential equation.
3
3. Modified Bessel Equation
By letting x = ix (where i = √−1) in the Bessel equation we can obtain the modified
Bessel equation of order ν, given as
$$x^2\frac{d^2y}{dx^2} + x\frac{dy}{dx} - (x^2 + \nu^2)\,y = 0$$
The solution to the modified Bessel equation yields modified Bessel functions of the first and
second kind as follows:
b) Second Kind: Kν(x) in the solution to the modified Bessel's equation is re-
ferred to as a modified Bessel function of the second kind, sometimes also
called the Macdonald function.
5. Kelvin’s Functions
A more general form of Bessel's modified equation can be written as

$$x^2\frac{d^2y}{dx^2} + x\frac{dy}{dx} - (\beta^2x^2 + \nu^2)\,y = 0$$

whose solution is

$$y = C\,I_\nu(\beta x) + D\,K_\nu(\beta x)$$

If we let

$$\beta^2 = i \qquad \text{where } i = \sqrt{-1}$$
then the Kelvin functions are obtained from the real and imaginary portions of this solution as
follows:
$$\mathrm{ber}_\nu\,x = \mathrm{Re}\,J_\nu(i^{3/2}x) \qquad \mathrm{bei}_\nu\,x = \mathrm{Im}\,J_\nu(i^{3/2}x)$$
Theory
Bessel Functions
Bessel’s differential equation, given as
$$x^2\frac{d^2y}{dx^2} + x\frac{dy}{dx} + (x^2 - \nu^2)\,y = 0$$
is often encountered when solving boundary value problems, such as separable solutions
to Laplace’s equation or the Helmholtz equation, especially when working in cylindrical or
spherical coordinates. The constant ν determines the order of the Bessel functions found in
the solution to Bessel’s differential equation and can take on any real numbered value. For
cylindrical problems the order of the Bessel function is an integer value (ν = n) while for
spherical problems the order is of half integer value (ν = n + 1/2).
Since Bessel’s differential equation is a second-order equation, there must be two linearly
independent solutions. Typically the general solution is given as:
1. Bessel functions of the first kind, Jν (x), which are finite at x = 0 for all real values
of ν
2. Bessel functions of the second kind, Yν (x), (also known as Weber or Neumann func-
tions) which are singular at x = 0
The Bessel function of the first kind of order ν can be determined using an infinite power
series expansion as follows:
$$J_\nu(x) = \sum_{k=0}^{\infty}\frac{(-1)^k\,(x/2)^{\nu+2k}}{k!\,\Gamma(\nu+k+1)}$$

$$= \frac{1}{\Gamma(1+\nu)}\left(\frac{x}{2}\right)^{\nu}\left[1 - \frac{(x/2)^2}{1(1+\nu)}\left(1 - \frac{(x/2)^2}{2(2+\nu)}\left(1 - \frac{(x/2)^2}{3(3+\nu)}\left(1 - \cdots\right)\right)\right)\right]$$
Figure 4.1: Plot of the Bessel Functions of the First Kind, Integer Order
$$J_\nu(x) = \sum_{k=0}^{\infty}\frac{(-1)^k\,(x/2)^{\nu+2k}}{k!\,(\nu+k)!}$$
Bessel Functions of the first kind of order 0, 1, 2 are shown in Fig. 4.1.
The Bessel function of the second kind, Yν(x), is sometimes referred to as a Weber function
or a Neumann function (which can be denoted as Nν(x)). It is related to the Bessel function
of the first kind as follows:

$$Y_\nu(x) = \frac{J_\nu(x)\cos(\nu\pi) - J_{-\nu}(x)}{\sin(\nu\pi)}$$

For integer orders the right hand side is evaluated as a limit,
in which case Yν is needed to provide the second linearly independent solution of Bessel's
equation. In contrast, for non-integer orders, Jν and J−ν are linearly independent and Yν
is redundant.
The Bessel function of the second kind of order ν can be expressed in terms of the Bessel
function of the first kind as follows:
2k−ν
1 (ν − k − 1)!
ν−1
2 x x
Yν (x) = Jν (x) ln +γ − +
π 2 π k=0 k! 2
1 1 1 1
(−1)k−1
1 + + ··· + + 1 + + ··· + 2k+ν
1
∞
2 k 2 k+ν x
+
π k=0 k!(k + ν)! 2
Bessel Functions of the second kind of order 0, 1, 2 are shown in Fig. 4.2.
Figure 4.2: Plot of the Bessel Functions of the Second Kind, Integer Order
Relations Satisfied by the Bessel Function
Recurrence Formulas
Bessel functions of higher order can be expressed in terms of Bessel functions of lower orders
for all real values of ν.
$$J_{\nu+1}(x) = \frac{2\nu}{x}J_\nu(x) - J_{\nu-1}(x) \qquad Y_{\nu+1}(x) = \frac{2\nu}{x}Y_\nu(x) - Y_{\nu-1}(x)$$

$$J'_\nu(x) = \frac{1}{2}\left[J_{\nu-1}(x) - J_{\nu+1}(x)\right] \qquad Y'_\nu(x) = \frac{1}{2}\left[Y_{\nu-1}(x) - Y_{\nu+1}(x)\right]$$

$$J'_\nu(x) = J_{\nu-1}(x) - \frac{\nu}{x}J_\nu(x) \qquad Y'_\nu(x) = Y_{\nu-1}(x) - \frac{\nu}{x}Y_\nu(x)$$

$$J'_\nu(x) = \frac{\nu}{x}J_\nu(x) - J_{\nu+1}(x) \qquad Y'_\nu(x) = \frac{\nu}{x}Y_\nu(x) - Y_{\nu+1}(x)$$

$$\frac{d}{dx}\left[x^\nu J_\nu(x)\right] = x^\nu J_{\nu-1}(x) \qquad \frac{d}{dx}\left[x^\nu Y_\nu(x)\right] = x^\nu Y_{\nu-1}(x)$$

$$\frac{d}{dx}\left[x^{-\nu}J_\nu(x)\right] = -x^{-\nu}J_{\nu+1}(x) \qquad \frac{d}{dx}\left[x^{-\nu}Y_\nu(x)\right] = -x^{-\nu}Y_{\nu+1}(x)$$
First Kind

$$J_n(x) = \frac{1}{\pi}\int_0^{\pi}\cos(n\theta - x\sin\theta)\,d\theta = \frac{1}{\pi}\int_0^{\pi}\cos(x\sin\theta - n\theta)\,d\theta$$

$$J_0(x) = \frac{1}{\pi}\int_0^{\pi}\cos(x\sin\theta)\,d\theta = \frac{1}{\pi}\int_0^{\pi}\cos(x\cos\theta)\,d\theta$$

$$J_1(x) = \frac{1}{\pi}\int_0^{\pi}\cos(\theta - x\sin\theta)\,d\theta = \frac{1}{\pi}\int_0^{\pi}\cos(x\sin\theta - \theta)\,d\theta$$

$$\qquad = \frac{1}{\pi}\int_0^{\pi}\cos\theta\,\sin(x\cos\theta)\,d\theta$$
from Bowman, pg. 57
$$J_0(x) = \frac{2}{\pi}\int_0^{\pi/2}\cos(x\sin\theta)\,d\theta = \frac{2}{\pi}\int_0^{\pi/2}\cos(x\cos\theta)\,d\theta$$
$$Y_n(x) = -\frac{2\,(x/2)^{-n}}{\sqrt{\pi}\,\Gamma\!\left(\frac{1}{2}-n\right)}\int_1^{\infty}\frac{\cos(xt)\,dt}{(t^2-1)^{n+1/2}} \qquad x > 0$$

$$Y_n(x) = \frac{1}{\pi}\int_0^{\pi}\sin(x\sin\theta - n\theta)\,d\theta - \frac{1}{\pi}\int_0^{\infty}\left[e^{nt} + e^{-nt}\cos(n\pi)\right]\exp(-x\sinh t)\,dt \qquad x > 0$$

$$Y_0(x) = \frac{4}{\pi^2}\int_0^{\pi/2}\cos(x\cos\theta)\left[\gamma + \ln(2x\sin^2\theta)\right]d\theta \qquad x > 0$$

$$Y_0(x) = -\frac{2}{\pi}\int_0^{\infty}\cos(x\cosh t)\,dt \qquad x > 0$$
Approximations
For x ≥ 2 one can use the following approximation based upon asymptotic expansions:

$$J_n(x) = \left(\frac{2}{\pi x}\right)^{1/2}\left[P_n(x)\cos u - Q_n(x)\sin u\right]$$

where $u \equiv x - (2n+1)\dfrac{\pi}{4}$ and the polynomials $P_n(x)$ and $Q_n(x)$ are given by
$$P_n(x) = 1 - \frac{(4n^2-1^2)(4n^2-3^2)}{2\cdot 1\,(8x)^2}\left(1 - \frac{(4n^2-5^2)(4n^2-7^2)}{4\cdot 3\,(8x)^2}\left(1 - \frac{(4n^2-9^2)(4n^2-11^2)}{6\cdot 5\,(8x)^2}(1 - \cdots)\right)\right)$$

and

$$Q_n(x) = \frac{4n^2-1^2}{1!\,(8x)}\left(1 - \frac{(4n^2-3^2)(4n^2-5^2)}{3\cdot 2\,(8x)^2}\left(1 - \frac{(4n^2-7^2)(4n^2-9^2)}{5\cdot 4\,(8x)^2}(1 - \cdots)\right)\right)$$
For n = 0

$$\sin u = \frac{1}{\sqrt{2}}(\sin x - \cos x) \qquad \cos u = \frac{1}{\sqrt{2}}(\sin x + \cos x)$$

$$J_0(x) = \frac{1}{\sqrt{\pi x}}\left[P_0(x)(\sin x + \cos x) - Q_0(x)(\sin x - \cos x)\right]$$
or

$$J_0(x) = \left(\frac{2}{\pi x}\right)^{1/2}\left[P_0(x)\cos\left(x - \frac{\pi}{4}\right) - Q_0(x)\sin\left(x - \frac{\pi}{4}\right)\right]$$

$$P_0(x) = 1 - \frac{1^2\cdot 3^2}{2!\,(8x)^2}\left(1 - \frac{5^2\cdot 7^2}{4\cdot 3\,(8x)^2}\left(1 - \frac{9^2\cdot 11^2}{6\cdot 5\,(8x)^2}(1 - \cdots)\right)\right)$$

$$Q_0(x) = -\frac{1^2}{8x}\left(1 - \frac{3^2\cdot 5^2}{3\cdot 2\,(8x)^2}\left(1 - \frac{7^2\cdot 9^2}{5\cdot 4\,(8x)^2}(1 - \cdots)\right)\right)$$
For n = 1

$$\sin u = -\frac{1}{\sqrt{2}}(\sin x + \cos x) \qquad \cos u = \frac{1}{\sqrt{2}}(\sin x - \cos x)$$

$$J_1(x) = \frac{1}{\sqrt{\pi x}}\left[P_1(x)(\sin x - \cos x) + Q_1(x)(\sin x + \cos x)\right]$$

or

$$J_1(x) = \left(\frac{2}{\pi x}\right)^{1/2}\left[P_1(x)\cos\left(x - \frac{3\pi}{4}\right) - Q_1(x)\sin\left(x - \frac{3\pi}{4}\right)\right]$$

$$P_1(x) = 1 + \frac{3\cdot 5}{2\cdot 1\,(8x)^2}\left(1 - \frac{21\cdot 45}{4\cdot 3\,(8x)^2}\left(1 - \frac{77\cdot 117}{6\cdot 5\,(8x)^2}(1 - \cdots)\right)\right)$$

$$Q_1(x) = \frac{3}{8x}\left(1 - \frac{35}{2\cdot 1\,(8x)^2}\left(1 - \frac{45\cdot 77}{5\cdot 4\,(8x)^2}(1 - \cdots)\right)\right)$$
Asymptotic Approximation of Bessel Functions
Large Values of x
$$Y_0(x) = \left(\frac{2}{\pi x}\right)^{1/2}\left[P_0(x)\sin\left(x - \frac{\pi}{4}\right) + Q_0(x)\cos\left(x - \frac{\pi}{4}\right)\right]$$

$$Y_1(x) = \left(\frac{2}{\pi x}\right)^{1/2}\left[P_1(x)\sin\left(x - \frac{3\pi}{4}\right) + Q_1(x)\cos\left(x - \frac{3\pi}{4}\right)\right]$$
$$J_\nu(x) = \sum_{k=0}^{\infty}\frac{(-1)^k\,(x/2)^{\nu+2k}}{k!\,\Gamma(\nu+k+1)}$$

$$= \frac{1}{\Gamma(1+\nu)}\left(\frac{x}{2}\right)^{\nu}\left[1 - \frac{(x/2)^2}{1(1+\nu)}\left(1 - \frac{(x/2)^2}{2(2+\nu)}\left(1 - \frac{(x/2)^2}{3(3+\nu)}(1 - \cdots)\right)\right)\right]$$

$$Z_k = \frac{-Y}{k(k+\nu)} \qquad k = 1, 2, 3, \ldots \qquad Y = (x/2)^2$$

where

$$B_0 = 1, \quad B_1 = Z_1\cdot B_0, \quad B_2 = Z_2\cdot B_1, \quad \ldots, \quad B_k = Z_k\cdot B_{k-1}$$
The approximation can be written as

$$J_\nu(x) = \frac{(x/2)^\nu}{\Gamma(1+\nu)}\sum_{k=0}^{U}B_k$$
$$J_{-\nu}(x) = \sum_{k=0}^{\infty}\frac{(-1)^k\,(x/2)^{2k-\nu}}{k!\,\Gamma(k+1-\nu)}$$

$$= \frac{1}{\Gamma(1-\nu)}\left(\frac{x}{2}\right)^{-\nu}\left[1 - \frac{(x/2)^2}{1(1-\nu)}\left(1 - \frac{(x/2)^2}{2(2-\nu)}\left(1 - \frac{(x/2)^2}{3(3-\nu)}(1 - \cdots)\right)\right)\right]$$

$$Z_k = \frac{-Y}{k(k-\nu)} \qquad k = 1, 2, 3, \ldots \qquad Y = (x/2)^2$$

where

$$B_0 = 1, \quad B_1 = Z_1\cdot B_0, \quad B_2 = Z_2\cdot B_1, \quad \ldots, \quad B_k = Z_k\cdot B_{k-1}$$

$$J_{-\nu}(x) = \frac{(x/2)^{-\nu}}{\Gamma(1-\nu)}\sum_{k=0}^{U}B_k$$
where U is some arbitrary value for the upper limit of the summation.
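The Bₖ recursion above translates directly into code. A minimal sketch, assuming a fixed term count in place of the arbitrary upper limit U:

```python
import math

def bessel_j(nu, x, terms=30):
    """Sum the power series for J_nu(x) using the recursion
    B_0 = 1, B_k = Z_k * B_{k-1}, with Z_k = -(x/2)^2 / (k (k + nu))."""
    y = (0.5 * x) ** 2          # Y = (x/2)^2
    b = 1.0                     # B_0
    total = 1.0
    for k in range(1, terms + 1):
        b *= -y / (k * (k + nu))
        total += b
    return (0.5 * x) ** nu / math.gamma(1.0 + nu) * total
```

Because each Bₖ is obtained from the previous one by a single multiply, no factorials or powers are recomputed; the scheme is accurate for small to moderate x, which is exactly the regime this summation is intended for.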
Second Kind, Positive Order
Note: xn+1 − xn → π as n → ∞
The roots of J0(x) = 0 can be computed approximately by Stokes's approximation, which was
developed for large n:

$$x_n = \frac{\alpha}{4}\left(1 + \frac{2}{\alpha^2} - \frac{62}{3\alpha^4} + \frac{15116}{15\alpha^6} - \frac{12554474}{105\alpha^8} + \frac{8368654292}{315\alpha^{10}} - \cdots\right)$$

$$x_n = \frac{\alpha}{4}\left(1 + \frac{2}{\alpha^2} - \frac{62}{3\alpha^4} + \frac{7558}{15\alpha^6}\right)$$
Note: xn+1 − xn → π as n → ∞
These roots can also be computed using Stokes's approximation, which was developed for large
n:

$$x_n = \frac{\beta}{4}\left(1 - \frac{6}{\beta^2} + \frac{6}{\beta^4} - \frac{4716}{5\beta^6} + \frac{3902418}{35\beta^8} - \frac{895167324}{35\beta^{10}} + \cdots\right)$$

$$x_n = \frac{\beta}{4}\left(1 - \frac{6}{\beta^2} + \frac{6}{\beta^4} - \frac{4716}{10\beta^6}\right)$$
with 0 ≤ B < ∞ are infinite in number and they can be computed accurately and
efficiently using the Newton-Raphson method. Thus the (i + 1)th iteration is given by

$$x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}$$

Accurate polynomial approximations of the Bessel functions J0(·) and J1(·) may be em-
ployed. To accelerate the convergence of the Newton-Raphson method, the first value for
the (n + 1)th root can be related to the converged value of the nth root plus π.
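The scheme just described can be sketched as follows, with the power series standing in for the polynomial approximations mentioned above, and using J0′(x) = −J1(x); the series truncation, iteration counts, and the starting guess 2.4 are choices of this illustration:

```python
import math

def j0(x, terms=80):
    """J0(x) from its power series (stand-in for a polynomial approximation)."""
    term = total = 1.0
    y = (0.5 * x) ** 2
    for k in range(1, terms):
        term *= -y / (k * k)
        total += term
    return total

def j1(x, terms=80):
    """J1(x) from its power series; note J0'(x) = -J1(x)."""
    term = total = 0.5 * x
    y = (0.5 * x) ** 2
    for k in range(1, terms):
        term *= -y / (k * (k + 1))
        total += term
    return total

def j0_roots(count):
    """Newton-Raphson for J0(x) = 0: x <- x - J0(x)/J0'(x) = x + J0(x)/J1(x).
    Each new root is started from the previous converged root plus pi."""
    roots, x = [], 2.4   # initial guess near the first zero
    for _ in range(count):
        for _ in range(40):
            x += j0(x) / j1(x)
        roots.append(x)
        x += math.pi
    return roots
```

Since consecutive roots differ by nearly π, the "previous root plus π" starting guess puts Newton's method well inside each root's basin of attraction.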
Aside:
Fisher-Yovanovich modified the Stokes approximation for roots of J0(x) = 0 and J1(x) =
0. It is based on taking the arithmetic average of the first three and four term expressions:

$$\delta_{n,\infty} = \frac{\alpha}{4}\left(1 + \frac{2}{\alpha^2} - \frac{62}{3\alpha^4} + \frac{15116}{30\alpha^6}\right)$$

with α = π(4n − 1), and

$$\delta_{n,0} = \frac{\beta}{4}\left(1 - \frac{6}{\beta^2} + \frac{6}{\beta^4} - \frac{4716}{10\beta^6}\right)$$
Potential Applications
1. problems involving electric fields, vibrations, heat conduction, optical diffraction plus
others involving cylindrical or spherical symmetry
Modified Bessel Functions
Bessel's equation and its solution are valid for complex arguments of x. Through a simple
change of variable in Bessel's equation, from x to ix (where i = √−1), we obtain the
modified Bessel's equation as follows:

$$x^2\frac{d^2y}{dx^2} + x\frac{dy}{dx} + \left((ix)^2 - \nu^2\right)y = 0$$

or equivalently

$$x^2\frac{d^2y}{dx^2} + x\frac{dy}{dx} - (x^2 + \nu^2)\,y = 0$$
The last equation is the so-called modified Bessel equation of order ν. Its solution is

$$y = C\,I_\nu(x) + D\,K_\nu(x)$$

where Iν(x) and Kν(x) are the modified Bessel functions of the first and second kind of order
ν.
Unlike the ordinary Bessel functions, which are oscillating, Iν (x) and Kν (x) are exponen-
tially growing and decaying functions as shown in Figs. 4.3 and 4.4.
It should be noted that the modified Bessel function of the First Kind of order 0 has a value
of 1 at x = 0 while for all other orders of ν > 0 the value of the modified Bessel function
is 0 at x = 0. The modified Bessel function of the Second Kind diverges for all orders at
x = 0.
Figure 4.3: Plot of the Modified Bessel Functions of the First Kind, Integer Order
Figure 4.4: Plot of the Modified Bessel Functions of the Second Kind, Integer Order
Relations Satisfied by the Modified Bessel Function
Recurrence Formulas
Bessel functions of higher order can be expressed by Bessel functions of lower orders for all
real values of ν.
$$I_{\nu+1}(x) = I_{\nu-1}(x) - \frac{2\nu}{x}I_\nu(x) \qquad K_{\nu+1}(x) = K_{\nu-1}(x) + \frac{2\nu}{x}K_\nu(x)$$

$$I'_\nu(x) = \frac{1}{2}\left[I_{\nu-1}(x) + I_{\nu+1}(x)\right] \qquad K'_\nu(x) = -\frac{1}{2}\left[K_{\nu-1}(x) + K_{\nu+1}(x)\right]$$

$$I'_\nu(x) = I_{\nu-1}(x) - \frac{\nu}{x}I_\nu(x) \qquad K'_\nu(x) = -K_{\nu-1}(x) - \frac{\nu}{x}K_\nu(x)$$

$$I'_\nu(x) = \frac{\nu}{x}I_\nu(x) + I_{\nu+1}(x) \qquad K'_\nu(x) = \frac{\nu}{x}K_\nu(x) - K_{\nu+1}(x)$$

$$\frac{d}{dx}\left[x^\nu I_\nu(x)\right] = x^\nu I_{\nu-1}(x) \qquad \frac{d}{dx}\left[x^\nu K_\nu(x)\right] = -x^\nu K_{\nu-1}(x)$$

$$\frac{d}{dx}\left[x^{-\nu}I_\nu(x)\right] = x^{-\nu}I_{\nu+1}(x) \qquad \frac{d}{dx}\left[x^{-\nu}K_\nu(x)\right] = -x^{-\nu}K_{\nu+1}(x)$$
First Kind

$$I_n(x) = \frac{1}{\pi}\int_0^{\pi}\cos(n\theta)\,\exp(x\cos\theta)\,d\theta$$

$$I_0(x) = \frac{1}{\pi}\int_0^{\pi}\exp(x\cos\theta)\,d\theta$$

$$I_1(x) = \frac{1}{\pi}\int_0^{\pi}\cos\theta\,\exp(x\cos\theta)\,d\theta$$
Alternate Integral Representation of I0 (x) and I1 (x)
$$I_0(x) = \frac{1}{\pi}\int_0^{\pi}\cosh(x\cos\theta)\,d\theta$$

$$I_1(x) = \frac{dI_0(x)}{dx} = \frac{1}{\pi}\int_0^{\pi}\sinh(x\cos\theta)\cos\theta\,d\theta$$
Second Kind
$$K_n(x) = \frac{\sqrt{\pi}\,(x/2)^n}{\Gamma\!\left(n+\frac{1}{2}\right)}\int_0^{\infty}\sinh^{2n}t\,\exp(-x\cosh t)\,dt \qquad x > 0$$

$$K_n(x) = \int_0^{\infty}\cosh(nt)\,\exp(-x\cosh t)\,dt \qquad x > 0$$

$$K_0(x) = \int_0^{\infty}\exp(-x\cosh t)\,dt \qquad x > 0$$

$$K_1(x) = \int_0^{\infty}\cosh t\,\exp(-x\cosh t)\,dt \qquad x > 0$$
Approximations
Asymptotic Approximation of Modified Bessel Functions
$$I_n(x) = \frac{e^x}{\sqrt{2\pi x}}\left[1 - \frac{4n^2-1^2}{1(8x)}\left(1 - \frac{4n^2-3^2}{2(8x)}\left(1 - \frac{4n^2-5^2}{3(8x)}(1 - \cdots)\right)\right)\right]$$

$$I_0(x) = \frac{e^x}{\sqrt{2\pi x}}\left[1 + \frac{1}{8x}\left(1 + \frac{9}{2(8x)}\left(1 + \frac{25}{3(8x)}(1 + \cdots)\right)\right)\right]$$

$$I_1(x) = \frac{e^x}{\sqrt{2\pi x}}\left[1 - \frac{3}{8x}\left(1 + \frac{5}{2(8x)}\left(1 + \frac{21}{3(8x)}(1 + \cdots)\right)\right)\right]$$
$$I_\nu(x) = \sum_{k=0}^{\infty}\frac{(x/2)^{\nu+2k}}{k!\,\Gamma(\nu+k+1)}$$

$$= \frac{1}{\Gamma(1+\nu)}\left(\frac{x}{2}\right)^{\nu}\left[1 + \frac{(x/2)^2}{1(1+\nu)}\left(1 + \frac{(x/2)^2}{2(2+\nu)}\left(1 + \frac{(x/2)^2}{3(3+\nu)}(1 + \cdots)\right)\right)\right]$$

$$Z_k = \frac{Y}{k(k+\nu)} \qquad k = 1, 2, 3, \ldots \qquad Y = (x/2)^2$$

where

$$B_0 = 1, \quad B_1 = Z_1\cdot B_0, \quad B_2 = Z_2\cdot B_1, \quad \ldots, \quad B_k = Z_k\cdot B_{k-1}$$

$$I_\nu(x) = \frac{(x/2)^\nu}{\Gamma(1+\nu)}\sum_{k=0}^{U}B_k$$
where U is some arbitrary value for the upper limit of the summation.
$$I_{-\nu}(x) = \sum_{k=0}^{\infty}\frac{(x/2)^{2k-\nu}}{k!\,\Gamma(k+1-\nu)}$$

$$= \frac{1}{\Gamma(1-\nu)}\left(\frac{x}{2}\right)^{-\nu}\left[1 + \frac{(x/2)^2}{1(1-\nu)}\left(1 + \frac{(x/2)^2}{2(2-\nu)}\left(1 + \frac{(x/2)^2}{3(3-\nu)}(1 + \cdots)\right)\right)\right]$$

$$Z_k = \frac{Y}{k(k-\nu)} \qquad k = 1, 2, 3, \ldots \qquad Y = (x/2)^2$$

where

$$B_0 = 1, \quad B_1 = Z_1\cdot B_0, \quad B_2 = Z_2\cdot B_1, \quad \ldots, \quad B_k = Z_k\cdot B_{k-1}$$

$$I_{-\nu}(x) = \frac{(x/2)^{-\nu}}{\Gamma(1-\nu)}\sum_{k=0}^{U}B_k$$
where U is some arbitrary value for the upper limit of the summation.
First Kind

For small arguments, with z = x/2,

$$I_n(x) = \frac{z^n}{n!}\left[1 + \frac{z^2}{1(n+1)}\left(1 + \frac{z^2}{2(n+2)}\left(1 + \frac{z^2}{3(n+3)}\left(1 + \frac{z^2}{4(n+4)}(1 + \cdots)\right)\right)\right)\right]$$

$$I_0(x) = 1 + z^2\left(1 + \frac{z^2}{2\cdot 2}\left(1 + \frac{z^2}{3\cdot 3}\left(1 + \frac{z^2}{4\cdot 4}\left(1 + \frac{z^2}{5\cdot 5}(1 + \cdots)\right)\right)\right)\right)$$

$$I_1(x) = z\left(1 + \frac{z^2}{1\cdot 2}\left(1 + \frac{z^2}{2\cdot 3}\left(1 + \frac{z^2}{3\cdot 4}\left(1 + \frac{z^2}{4\cdot 5}(1 + \cdots)\right)\right)\right)\right)$$
Second Kind
$$K_n(x) = \left(\frac{\pi}{2x}\right)^{1/2}e^{-x}\left[1 + \frac{4n^2-1^2}{1(8x)}\left(1 + \frac{4n^2-3^2}{2(8x)}\left(1 + \frac{4n^2-5^2}{3(8x)}(1 + \cdots)\right)\right)\right]$$

$$K_0(x) = \left(\frac{\pi}{2x}\right)^{1/2}e^{-x}\left[1 - \frac{1}{8x}\left(1 - \frac{9}{2(8x)}\left(1 - \frac{25}{3(8x)}(1 - \cdots)\right)\right)\right]$$

$$K_1(x) = \left(\frac{\pi}{2x}\right)^{1/2}e^{-x}\left[1 + \frac{3}{8x}\left(1 - \frac{5}{2(8x)}\left(1 - \frac{21}{3(8x)}(1 - \cdots)\right)\right)\right]$$
Series expansions based upon the trapezoidal rule applied to certain forms of the integral
representation of the Bessel functions can be developed for any desired accuracy. Several
expansions are given below.

$$15\,J_0(x) = \cos x + 2\sum_{j=1}^{7}\cos\!\left(x\cos\frac{j\pi}{15}\right)$$

$$15\,J_1(x) = \sin x + 2\sum_{j=1}^{7}\sin\!\left(x\cos\frac{j\pi}{15}\right)\cos\frac{j\pi}{15}$$

$$15\,I_0(x) = \cosh x + 2\sum_{j=1}^{7}\cosh\!\left(x\cos\frac{j\pi}{15}\right)$$

$$15\,I_1(x) = \sinh x + 2\sum_{j=1}^{7}\sinh\!\left(x\cos\frac{j\pi}{15}\right)\cos\frac{j\pi}{15}$$
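These finite sums are easy to verify against the power series. A sketch comparing the J0 and I0 forms (the series truncation and tolerance are arbitrary choices of this illustration):

```python
import math

def j0_series(x, terms=40):
    """J0(x) from its power series."""
    term = total = 1.0
    y = (0.5 * x) ** 2
    for k in range(1, terms):
        term *= -y / (k * k)
        total += term
    return total

def i0_series(x, terms=40):
    """I0(x) from its power series (all-positive terms)."""
    term = total = 1.0
    y = (0.5 * x) ** 2
    for k in range(1, terms):
        term *= y / (k * k)
        total += term
    return total

def j0_trap(x):
    """15 J0(x) = cos x + 2 sum_{j=1..7} cos(x cos(j pi/15))."""
    return (math.cos(x) + 2.0 * sum(math.cos(x * math.cos(j * math.pi / 15))
                                    for j in range(1, 8))) / 15.0

def i0_trap(x):
    """15 I0(x) = cosh x + 2 sum_{j=1..7} cosh(x cos(j pi/15))."""
    return (math.cosh(x) + 2.0 * sum(math.cosh(x * math.cos(j * math.pi / 15))
                                     for j in range(1, 8))) / 15.0
```

For moderate x the trapezoidal sums agree with the series to essentially machine precision, reflecting the very rapid convergence of the trapezoidal rule on these periodic integrands.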
For x ≥ 0.1, 8 decimal place accuracy is obtained by

$$4\,K_0(x) = e^{-x} + e^{-x\cosh 6} + 2\sum_{j=1}^{11}\exp\!\left[-x\cosh(j/2)\right]$$

$$4\,K_1(x) = e^{-x} + \cosh 6\; e^{-x\cosh 6} + 2\sum_{j=1}^{11}\cosh(j/2)\exp\!\left[-x\cosh(j/2)\right]$$
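In the same spirit, the two sums above can be evaluated directly and checked against tabulated values. A sketch — the reference constants K0(1) ≈ 0.4210244 and K1(1) ≈ 0.6019072 are standard tabulated values:

```python
import math

def k0_sum(x):
    """4 K0(x) = e^{-x} + e^{-x cosh 6} + 2 sum_{j=1..11} exp(-x cosh(j/2))."""
    s = math.exp(-x) + math.exp(-x * math.cosh(6.0))
    s += 2.0 * sum(math.exp(-x * math.cosh(0.5 * j)) for j in range(1, 12))
    return 0.25 * s

def k1_sum(x):
    """4 K1(x) = e^{-x} + cosh(6) e^{-x cosh 6}
                 + 2 sum_{j=1..11} cosh(j/2) exp(-x cosh(j/2))."""
    s = math.exp(-x) + math.cosh(6.0) * math.exp(-x * math.cosh(6.0))
    s += 2.0 * sum(math.cosh(0.5 * j) * math.exp(-x * math.cosh(0.5 * j))
                   for j in range(1, 12))
    return 0.25 * s
```

These are simply the trapezoidal rule with step 1/2 applied on [0, 6] to the integral representations of K0 and K1 given earlier; the integrand decays double-exponentially, which is why so few points suffice.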
Potential Applications
Kelvin’s Functions
Consider the differential equation

$$x^2\frac{d^2y}{dx^2} + x\frac{dy}{dx} - (ik^2x^2 + n^2)\,y = 0 \qquad i = \sqrt{-1}$$

or, with β² = ik²,

$$x^2\frac{d^2y}{dx^2} + x\frac{dy}{dx} - (\beta^2x^2 + n^2)\,y = 0$$

with solution

$$y = A\,I_n(\beta x) + B\,K_n(\beta x)$$

or

$$y = A\,I_n(\sqrt{i}\,kx) + B\,K_n(\sqrt{i}\,kx)$$
When x is real, Jn(i³ᐟ²x) and Kn(i¹ᐟ²x) are not necessarily real. We obtain real functions
by the following definitions:
$$\mathrm{ber}_n\,x = \mathrm{Re}\,J_n(i^{3/2}x) \qquad \mathrm{bei}_n\,x = \mathrm{Im}\,J_n(i^{3/2}x)$$

$$\mathrm{ker}_n\,x = \mathrm{Re}\,i^{-n}K_n(i^{1/2}x) \qquad \mathrm{kei}_n\,x = \mathrm{Im}\,i^{-n}K_n(i^{1/2}x)$$
It is, however, customary to omit the subscript from the latter definitions when the order n
is zero and to write simply
The complex function ber x + i bei x is often expressed in terms of its modulus and its
amplitude:

$$\mathrm{ber}\,x + i\,\mathrm{bei}\,x = M_0(x)\,e^{i\theta_0(x)}$$

where

$$M_0(x) = \left[(\mathrm{ber}\,x)^2 + (\mathrm{bei}\,x)^2\right]^{1/2}, \qquad \theta_0 = \arctan\frac{\mathrm{bei}\,x}{\mathrm{ber}\,x}$$

and in general

$$M_n(x) = \left[(\mathrm{ber}_n\,x)^2 + (\mathrm{bei}_n\,x)^2\right]^{1/2}, \qquad \theta_n = \arctan\frac{\mathrm{bei}_n\,x}{\mathrm{ber}_n\,x}$$
$$t^2\frac{d^2y}{dt^2} + t\frac{dy}{dt} - t^2y = 0$$

Set t = x√i and the equation becomes

$$x^2\frac{d^2y}{dx^2} + x\frac{dy}{dx} - ix^2y = 0$$

with the solutions I0(x√i) and K0(x√i). The ber and bei functions are defined as follows.
Since
$$I_0(t) = 1 + \left(\frac{t}{2}\right)^2 + \frac{(t/2)^4}{(2!)^2} + \frac{(t/2)^6}{(3!)^2} + \cdots$$

we obtain

$$\mathrm{ber}\,x = 1 - \frac{(x/2)^4}{(2!)^2} + \frac{(x/2)^8}{(4!)^2} - \cdots$$

$$\mathrm{bei}\,x = \left(\frac{x}{2}\right)^2 - \frac{(x/2)^6}{(3!)^2} + \frac{(x/2)^{10}}{(5!)^2} - \cdots$$
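The two series translate directly into code. A minimal sketch (the term count is an arbitrary choice; the check values ber(1) ≈ 0.984382 and bei(1) ≈ 0.249566 follow from the first few terms):

```python
import math

def ber(x, terms=20):
    """ber x = 1 - (x/2)^4/(2!)^2 + (x/2)^8/(4!)^2 - ..."""
    total = 0.0
    for r in range(terms):
        total += (-1.0) ** r * (0.5 * x) ** (4 * r) / math.factorial(2 * r) ** 2
    return total

def bei(x, terms=20):
    """bei x = (x/2)^2 - (x/2)^6/(3!)^2 + (x/2)^10/(5!)^2 - ..."""
    total = 0.0
    for r in range(terms):
        total += (-1.0) ** r * (0.5 * x) ** (4 * r + 2) / math.factorial(2 * r + 1) ** 2
    return total
```

Both series are entire, so the same code works for any real x, though many more terms are needed as x grows.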
Both ber x and bei x are real for real x, and it can be seen that both series are absolutely
convergent for all values of x. Among the more obvious properties are
$$\mathrm{ber}\,0 = 1 \qquad \mathrm{bei}\,0 = 0$$

and

$$\int_0^x x\,\mathrm{ber}\,x\,dx = x\,\mathrm{bei}'\,x, \qquad \int_0^x x\,\mathrm{bei}\,x\,dx = -x\,\mathrm{ber}'\,x$$
In a similar manner the functions ker x and kei x are defined to be respectively the real
and imaginary parts of the complex function K0(x√i), namely

$$\mathrm{ker}\,x + i\,\mathrm{kei}\,x = K_0(x\sqrt{i})$$

It can be shown that

$$\mathrm{ker}\,x = -\left[\ln\frac{x}{2} + \gamma\right]\mathrm{ber}\,x + \frac{\pi}{4}\,\mathrm{bei}\,x + \sum_{r=1}^{\infty}(-1)^r\,\phi(2r)\,\frac{(x/2)^{4r}}{[(2r)!]^2}$$

and

$$\mathrm{kei}\,x = -\left[\ln\frac{x}{2} + \gamma\right]\mathrm{bei}\,x - \frac{\pi}{4}\,\mathrm{ber}\,x + \sum_{r=0}^{\infty}(-1)^r\,\phi(2r+1)\,\frac{(x/2)^{4r+2}}{[(2r+1)!]^2}$$

where

$$\phi(r) = \sum_{s=1}^{r}\frac{1}{s}$$
Potential Applications
Hankel Functions
We can define two new linearly independent functions

$$H_n^{(1)}(x) = J_n(x) + i\,Y_n(x) \qquad H_n^{(2)}(x) = J_n(x) - i\,Y_n(x)$$

which are obviously solutions of Bessel's equation and therefore the general solution can be
written as

$$y = A\,H_n^{(1)}(x) + B\,H_n^{(2)}(x)$$

where A and B are arbitrary constants. The functions Hn(1)(x) and Hn(2)(x) are called
Hankel's Bessel functions of the third kind. Both are, of course, infinite at x = 0; their
usefulness is connected with their behavior for large values of x.
Since Hankel functions are linear combinations of Jn and Yn, they satisfy the same recurrence
relationships.
Orthogonality of Bessel Functions
Let u = Jn(λx) and v = Jn(µx) with λ = µ be two solutions of the Bessel equations
and
x2 v + xv + (µ2 x2 − n2 )v = 0
Multiplying the first equation by v and the second by u, and subtracting, we obtain
Division by x gives
or
d
[x(vu − uv )] = (µ2 − λ2 )xuv
dx
(µ − λ )
2 2
xuv dx = x(vu − uv )
(µ − λ )
2 2
xJn(λx) Jn(µx) dx = x[Jn(µx) Jn (λx) − Jn(λx) Jn (µx)]
This integral is the so-called Lommel integral. The right hand side vanishes at the lower
limit zero. It also vanishes at some arbitrary upper limit, say x = b, provided

$$J_n(\mu b) = 0 = J_n(\lambda b)$$

or

$$J'_n(\mu b) = 0 = J'_n(\lambda b)$$

In the first case this means that µb and λb are two roots of Jn(x) = 0, and in the second
case it means that µb and λb are two roots of J′n(x) = 0. In either case we have the
following orthogonality property

$$\int_0^b xJ_n(\lambda x)J_n(\mu x)\,dx = 0$$

This property is useful in Bessel-Fourier expansions of some arbitrary function f(x) over
the finite interval 0 ≤ x ≤ b. Further, the functions Jn(λx) and Jn(µx) are said to be
orthogonal in the interval 0 ≤ x ≤ b with respect to the weight function x.
$$\int_0^x xJ_n^2(\lambda x)\,dx = \frac{x^2}{2}\left\{\left[J'_n(\lambda x)\right]^2 + \left(1 - \frac{n^2}{(\lambda x)^2}\right)\left[J_n(\lambda x)\right]^2\right\}$$

where

$$J'_n(\lambda x) = \frac{dJ_n(r)}{dr}\bigg|_{r = \lambda x}$$
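Both the orthogonality property and this norm integral can be verified numerically for n = 0 and b = 1. A sketch using the power series for J0 and J1 and the first two roots of J0 — the root values, grid size, and term counts are assumptions of this illustration:

```python
import math

def j0(x, terms=60):
    term = total = 1.0
    y = (0.5 * x) ** 2
    for k in range(1, terms):
        term *= -y / (k * k)
        total += term
    return total

def j1(x, terms=60):
    term = total = 0.5 * x
    y = (0.5 * x) ** 2
    for k in range(1, terms):
        term *= -y / (k * (k + 1))
        total += term
    return total

def lommel(lam, mu, n=4000):
    """Trapezoid for int_0^1 x J0(lam x) J0(mu x) dx."""
    h = 1.0 / n
    total = 0.5 * j0(lam) * j0(mu)   # endpoint x = 1; the x = 0 term vanishes
    for i in range(1, n):
        x = i * h
        total += x * j0(lam * x) * j0(mu * x)
    return total * h

lam, mu = 2.4048255577, 5.5200781103   # first two roots of J0
```

With λ ≠ µ the integral is essentially zero, while for λ = µ the norm formula above reduces (since J0(λ) = 0, n = 0, x = 1) to ½ J0′(λ)² = ½ J1(λ)².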
Assigned Problems
a) J0(x)   e) Y0(x)
b) J1(y)   f) Y1(y)
c) I0(z)   g) K0(z)
d) I1(z)   h) K1(z)

given

x = 3.83171,  y = 2.40482,  z = 1.75755
2. Compute to 6 decimal places the first six roots xn of the transcendental equation.

3. Compute to 4 decimal places the coefficients An given

$$A_n = \frac{2B}{(x_n^2 + B^2)\,J_0(x_n)}$$

for B = 0.1, 1.0, 10, and 100. The xn are the roots found in Problem 2.
4. Compute to 4 decimal places the coefficients Bn (n = 1, 2, 3, 4) given

$$B_n = \frac{2A_nJ_1(x_n)}{x_n}$$

for B = 0.1, 1.0, 10, and 100. The xn are the roots found in Problem 2 and the
An are the coefficients found in Problem 3.
$$\eta = \frac{1}{\sqrt{\gamma}}\,\frac{I_{2/3}(4\sqrt{\gamma}/3)}{I_{-2/3}(4\sqrt{\gamma}/3)}$$

$$\eta = \frac{1}{\sqrt{\gamma}}\,\frac{I_1(2\sqrt{\gamma})}{I_0(2\sqrt{\gamma})}$$

$$\eta = \frac{2\rho}{x(1-\rho^2)}\,\frac{I_1(x)K_1(\rho x) - K_1(x)I_1(\rho x)}{I_0(\rho x)K_1(x) + I_1(x)K_0(\rho x)}$$
8. Show that

$$\mathrm{ii)}\quad \frac{d}{dx}\left[xJ_1(x)\right] = xJ_0(x)$$
9. Given the function,
determine f′, f″, and f‴. Reduce all expressions to functions of J0(x), J1(x) and
B only.

$$\mathrm{ii)}\quad \int x^3J_0(x)\,dx = x(x^2 - 4)\,J_1(x) + 2x^2J_0(x)$$

$$\int_0^1 xJ_0(\delta x)\,dx = 0$$
14. Given the Fourier-Bessel expansion of f(x) of zero order over the interval 0 ≤ x ≤ 1,
where δn are the roots of the equation J0(x) = 0, determine the coefficients An
when f(x) = 1 − x².

15. Show that

$$x = 2\sum_{n=1}^{\infty}\frac{J_1(\delta_n x)}{\delta_n\,J_2(\delta_n)}$$
16. Obtain the solution to the following second order ordinary differential equations:

$$\mathrm{i)}\quad y'' + xy = 0$$
$$\mathrm{ii)}\quad y'' + 4x^2y = 0$$
$$\mathrm{iv)}\quad xy'' + y' + k^2y = 0 \qquad k > 0$$
$$\mathrm{v)}\quad x^2y'' + x^2y' + \tfrac{1}{4}y = 0$$
$$\mathrm{vi)}\quad y'' + \frac{1}{x}y' - \left(1 + \frac{4}{x^2}\right)y = 0$$
$$\mathrm{vii)}\quad xy'' + 2y' + xy = 0$$

17. Obtain the solution for the following problem:

$$xy'' + y' - m^2by = 0 \qquad 0 \le x \le b,\ m > 0$$

with
18. Obtain the solution for the following problem:

$$x^2y'' + 2xy' - m^2xy = 0 \qquad 0 \le x \le b,\ m > 0$$

with

$$I_{3/2}(x) = \left(\frac{2}{\pi x}\right)^{1/2}\left(\cosh x - \frac{\sinh x}{x}\right)$$

$$I_{-3/2}(x) = \left(\frac{2}{\pi x}\right)^{1/2}\left(\sinh x - \frac{\cosh x}{x}\right)$$
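The two closed forms above can be checked against the general power series for I_ν. A minimal sketch (term count arbitrary; math.gamma handles the half-integer and negative non-integer arguments that arise):

```python
import math

def i_nu(nu, x, terms=40):
    """I_nu(x) = sum_k (x/2)^{nu+2k} / (k! Gamma(nu+k+1))."""
    total = 0.0
    for k in range(terms):
        total += (0.5 * x) ** (nu + 2 * k) / (math.factorial(k) * math.gamma(nu + k + 1))
    return total

def i_3_2(x):
    """I_{3/2}(x) = sqrt(2/(pi x)) (cosh x - sinh x / x)."""
    return math.sqrt(2.0 / (math.pi * x)) * (math.cosh(x) - math.sinh(x) / x)

def i_m3_2(x):
    """I_{-3/2}(x) = sqrt(2/(pi x)) (sinh x - cosh x / x)."""
    return math.sqrt(2.0 / (math.pi * x)) * (math.sinh(x) - math.cosh(x) / x)
```

Note that I_{−3/2}(x) is negative for small x > 0 (the leading series term has Γ(−1/2) < 0 in its denominator), and both representations reproduce this.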
Selected References
6. McLachlan, N.W., Bessel Functions for Engineers, 2nd Edition, Oxford University
Press, London, 1955.
9. Sneddon, I.N., Special Functions of Mathematical Physics and Chemistry, 2nd Edi-
tion, Oliver and Boyd, Edinburgh, 1961.
10. Watson, G.N., A Treatise on the Theory of Bessel Functions, 2nd Edition, Cambridge
University Press, London, 1931.
11. Wheelon, A. D., Tables of Summable Series and Integrals Involving Bessel Functions,
Holden-Day, San Francisco, 1968.
Legendre Polynomials and Functions
Reading Problems
Outline
Background and Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Assigned Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Background and Definitions
The ordinary differential equation referred to as Legendre’s differential equation is frequently
encountered in physics and engineering. In particular, it occurs when solving Laplace’s
equation in spherical coordinates.
Adrien-Marie Legendre (September 18, 1752 - January 10, 1833) began using what are now
referred to as Legendre polynomials in 1784 while studying the attraction of spheroids and
ellipsoids. His work was important for geodesy.
$$(1-x^2)\frac{d^2y}{dx^2} - 2x\frac{dy}{dx} + n(n+1)\,y = 0 \qquad n > 0,\ |x| < 1$$

is known as Legendre's equation. The general solution to this equation is given as a function
of two Legendre functions as follows

$$y = A\,P_n(x) + B\,Q_n(x)$$

where

$$P_n(x) = \frac{1}{2^nn!}\frac{d^n}{dx^n}(x^2-1)^n \qquad \text{Legendre function of the first kind}$$

$$Q_n(x) = \frac{1}{2}P_n(x)\ln\frac{1+x}{1-x} \qquad \text{Legendre function of the second kind}$$
$$(1-x^2)\frac{d^2y}{dx^2} - 2x\frac{dy}{dx} + \left[n(n+1) - \frac{m^2}{1-x^2}\right]y = 0$$

If we set m = 0 in this equation the differential equation reduces to Legendre's equation.
The general solution is

$$y = A\,P_n^m(x) + B\,Q_n^m(x)$$

where Pnm(x) and Qnm(x) are called the associated Legendre functions of the first and second
kind given as

$$P_n^m(x) = (1-x^2)^{m/2}\frac{d^m}{dx^m}P_n(x)$$

$$Q_n^m(x) = (1-x^2)^{m/2}\frac{d^m}{dx^m}Q_n(x)$$
Legendre’s Equation and Its Solutions
Legendre's differential equation is

$$(1-x^2)\frac{d^2y}{dx^2} - 2x\frac{dy}{dx} + n(n+1)\,y = 0 \qquad n > 0,\ |x| < 1$$

or equivalently

$$\frac{d}{dx}\left[(1-x^2)\frac{dy}{dx}\right] + n(n+1)\,y = 0 \qquad n > 0,\ |x| < 1$$

Solutions of this equation are called Legendre functions of order n. The general solution can
be expressed as

$$y = A\,P_n(x) + B\,Q_n(x)$$

where Pn(x) and Qn(x) are Legendre functions of the first and second kind of order n.

$$P_n(x) = \frac{1}{2^nn!}\frac{d^n}{dx^n}(x^2-1)^n$$

Legendre functions of the first kind (Pn(x)) and second kind (Qn(x)) of order n = 0, 1, 2, 3
are shown in the following two plots.
The first several Legendre polynomials are listed below

$$P_0(x) = 1 \qquad\qquad P_3(x) = \frac{1}{2}(5x^3 - 3x)$$
$$P_1(x) = x \qquad\qquad P_4(x) = \frac{1}{8}(35x^4 - 30x^2 + 3)$$
$$P_2(x) = \frac{1}{2}(3x^2 - 1) \qquad\qquad P_5(x) = \frac{1}{8}(63x^5 - 70x^3 + 15x)$$

The recurrence formula

$$P_{n+1}(x) = \frac{2n+1}{n+1}\,x\,P_n(x) - \frac{n}{n+1}\,P_{n-1}(x)$$

can be used to obtain higher order polynomials. In all cases Pn(1) = 1 and
Pn(−1) = (−1)ⁿ. The polynomials satisfy

$$\int_{-1}^{1}P_m(x)\,P_n(x)\,dx = 0 \qquad m \ne n$$

$$\int_{-1}^{1}\left[P_n(x)\right]^2dx = \frac{2}{2n+1} \qquad m = n$$
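The recurrence above is the standard way to generate the polynomials numerically. A minimal sketch:

```python
def legendre_p(n, x):
    """P_n(x) via P_{n+1} = ((2n+1) x P_n - n P_{n-1}) / (n+1)."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x     # P_0, P_1
    for m in range(1, n):
        p_prev, p = p, ((2 * m + 1) * x * p - m * p_prev) / (m + 1)
    return p
```

The recurrence is numerically stable in the forward direction for |x| ≤ 1, so this evaluates Pn(x) reliably without forming the explicit polynomial coefficients.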
[Plots of the Legendre functions Pn(x) and Qn(x) for n = 0, 1, 2, 3 over −1 ≤ x ≤ 1]
Orthogonal Series of Legendre Polynomials
Any function f(x) which is finite and single-valued in the interval −1 ≤ x ≤ 1, and which
has a finite number of discontinuities within this interval, can be expressed as a series of
Legendre polynomials.
We let

$$f(x) = \sum_{n=0}^{\infty}A_nP_n(x)$$

Then

$$\int_{-1}^{1}f(x)\,P_m(x)\,dx = \sum_{n=0}^{\infty}A_n\int_{-1}^{1}P_m(x)\,P_n(x)\,dx$$

so that by orthogonality

$$A_n = \frac{2n+1}{2}\int_{-1}^{1}f(x)\,P_n(x)\,dx \qquad n = 0, 1, 2, 3\ldots$$
Since Pn(x) is an even function of x when n is even, and an odd function when n is odd, it
follows that if f (x) is an even function of x the coefficients An will vanish when n is odd;
whereas if f (x) is an odd function of x, the coefficients An will vanish when n is even.
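The coefficient formula (and the even/odd vanishing just noted) can be demonstrated numerically. A sketch using f(x) = x², for which A0 = 1/3, A2 = 2/3 and all other An vanish; the grid size is an arbitrary choice:

```python
def legendre_p(n, x):
    """P_n(x) via the standard recurrence."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for m in range(1, n):
        p_prev, p = p, ((2 * m + 1) * x * p - m * p_prev) / (m + 1)
    return p

def coeff(n, f, panels=2000):
    """A_n = (2n+1)/2 * int_{-1}^{1} f(x) P_n(x) dx, by the trapezoidal rule."""
    h = 2.0 / panels
    total = 0.5 * (f(-1.0) * legendre_p(n, -1.0) + f(1.0) * legendre_p(n, 1.0))
    for i in range(1, panels):
        x = -1.0 + i * h
        total += f(x) * legendre_p(n, x)
    return 0.5 * (2 * n + 1) * total * h
```

Because f(x) = x² is even, the odd coefficients come out as zero to rounding error, exactly as the parity argument above predicts.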
When x = cos θ the function f(θ) can be written

$$f(\theta) = \sum_{n=0}^{\infty}A_nP_n(\cos\theta) \qquad 0 \le \theta \le \pi$$

where

$$A_n = \frac{2n+1}{2}\int_0^{\pi}f(\theta)\,P_n(\cos\theta)\sin\theta\,d\theta \qquad n = 0, 1, 2, 3\ldots$$
$$P_{2n}(0) = \frac{(-1)^n\,\Gamma(n+1/2)}{\sqrt{\pi}\,\Gamma(n+1)} \qquad P_{2n+1}(0) = 0$$

$$P'_{2n}(0) = 0 \qquad P'_{2n+1}(0) = \frac{(-1)^n\,2\,\Gamma(n+3/2)}{\sqrt{\pi}\,\Gamma(n+1)}$$

$$P'_n(1) = \frac{n(n+1)}{2} \qquad P'_n(-1) = (-1)^{n-1}\,\frac{n(n+1)}{2}$$

$$|P_n(x)| \le 1$$

where $P'_n(1) = \dfrac{dP_n(x)}{dx}$ at $x = 1$.
Generating Function for Legendre Polynomials
If A is a fixed point with coordinates (x₁, y₁, z₁) and P is the variable point (x, y, z) and
the distance AP is denoted by R, we have

$$R^2 = (x-x_1)^2 + (y-y_1)^2 + (z-z_1)^2$$

From the theory of Newtonian potential we know that the potential at the point P due to
a unit mass situated at the point A is given by

$$\phi = \frac{C}{R}$$

where C is some constant. It can be shown that this function is a solution of Laplace's
equation.

[Sketch: points A and B at distances a = |OA| and r = |OB| from the origin O, separated by the angle θ]

Introducing the angle θ between OA and OB,

$$\phi = \frac{C}{R} = \frac{C}{\sqrt{r^2 + a^2 - 2ar\cos\theta}}$$
Through substitution we can write

$$\phi = \frac{C}{r}\left(1 - 2xt + t^2\right)^{-1/2}$$

where

$$t = \frac{a}{r}, \qquad x = \cos\theta$$

Therefore

$$\phi \equiv \frac{C}{r}\,g(x, t)$$

and

$$g(x, t) = (1 - 2xt + t^2)^{-1/2}$$

is defined as the generating function for Pn(x). Expanding by the binomial expansion we
have

$$g(x, t) = \sum_{n=0}^{\infty}\frac{(1/2)_n\,(2xt - t^2)^n}{n!}$$
where

$$(\alpha)_0 = 1, \qquad (\alpha)_n = \alpha(\alpha+1)(\alpha+2)\cdots(\alpha+n-1)$$

(α)ₙ is referred to as the Pochhammer symbol and (α, n) is the Appell symbol.
Thus we have

$$g(x, t) = \sum_{n=0}^{\infty}\frac{(1/2)_n}{n!}\sum_{k=0}^{n}\frac{n!\,(2x)^{n-k}\,t^{n-k}\,(-t^2)^k}{k!\,(n-k)!}$$

Collecting powers of t gives

$$g(x, t) = (1 - 2xt + t^2)^{-1/2} = \sum_{n=0}^{\infty}\left[\sum_{k=0}^{\lfloor n/2\rfloor}\frac{(-1)^k\,(2n-2k)!\,x^{n-2k}}{2^n\,k!\,(n-2k)!\,(n-k)!}\right]t^n$$

so that

$$g(x, t) = (1 - 2xt + t^2)^{-1/2} = \sum_{n=0}^{\infty}P_n(x)\,t^n \qquad |x| \le 1,\ |t| < 1$$
n=0
$$Q_n(x) = \frac{1}{2}P_n(x)\ln\frac{1+x}{1-x} - W_{n-1}(x)$$

where

$$W_{n-1}(x) = \sum_{m=1}^{n}\frac{1}{m}P_{m-1}(x)\,P_{n-m}(x)$$

is a polynomial of degree (n − 1). The first term of Qn(x) has logarithmic singularities
at x = ±1 or θ = 0 and π.
The first few polynomials are listed below
$$Q_0(x) = \frac{1}{2}\ln\frac{1+x}{1-x}$$

$$Q_1(x) = P_1(x)\,Q_0(x) - 1$$

$$Q_2(x) = P_2(x)\,Q_0(x) - \frac{3}{2}x$$

$$Q_3(x) = P_3(x)\,Q_0(x) - \frac{5}{2}x^2 + \frac{2}{3}$$
The higher order polynomials Qn(x) can be obtained by means of recurrence formulas
exactly analogous to those for Pn(x).
Numerous relations involving the Legendre functions can be derived by means of complex
variable theory. One such relation is an integral relation for Qn(x):

$$Q_n(x) = \int_0^{\infty}\left(x + \sqrt{x^2-1}\,\cosh\theta\right)^{-n-1}d\theta \qquad |x| > 1$$

Another is the generating function relation

$$(1 - 2xt + t^2)^{-1/2}\cosh^{-1}\!\left(\frac{x - t}{\sqrt{x^2 - 1}}\right) = \sum_{n=0}^{\infty}Q_n(x)\,t^n$$
Legendre’s Associated Differential Equation
The differential equation
d2 y dy m2
(1 − x )
2
− 2x + n(n + 1) − y=0
dx2 dx 1 − x2
y = A Pm m
n (x) + B Qn (x)
where Pm m
n (x) and Qn (x) are called the associated Legendre functions of the first and second
kind respectively. They are given in terms of ordinary Legendre functions.
dm
Pm
n (x) = (1 − x ) 2 m/2
Pn(x)
dxm
dm
n (x) = (1 − x )
Qm 2 m/2
Qn(x)
dxm
The Pm n (x) functions are bounded within the interval −1 ≤ x ≤ 1 whereas Qn (x)
m
(1 − x2 )m/2 dm+n
Pm
n (x) = (x2 − 1)n = 0 m>n
2nn! dxm+n
For example,

P_1^1(x) = (1 - x^2)^{1/2} \qquad\qquad P_3^1(x) = \frac{3}{2}(5x^2 - 1)(1 - x^2)^{1/2}
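The closed form for P_3^1 follows directly from the definition, and can be checked by differentiating the Legendre series for P_3 (a sketch using NumPy's legder; the grid is an arbitrary choice):

```python
import numpy as np
from numpy.polynomial import legendre as L

x = np.linspace(-0.9, 0.9, 7)

# P_3^1(x) = (1 - x^2)^(1/2) d/dx P_3(x); the coefficient vector
# [0, 0, 0, 1] represents P_3 as a Legendre series, and legder
# differentiates the series term by term.
dP3 = L.legval(x, L.legder([0, 0, 0, 1]))
P31 = np.sqrt(1 - x**2) * dP3

assert np.allclose(P31, 1.5 * (5 * x**2 - 1) * np.sqrt(1 - x**2))
```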
13
Other associated Legendre functions can be obtained by the recurrence formulas

(n + 1 - m)\, P_{n+1}^m(x) = (2n + 1)\, x\, P_n^m(x) - (n + m)\, P_{n-1}^m(x)

P_n^{m+2}(x) = \frac{2(m + 1)\, x}{(1 - x^2)^{1/2}}\, P_n^{m+1}(x) - (n - m)(n + m + 1)\, P_n^m(x)
Orthogonality of P_n^m(x)

\int_{-1}^{1} P_n^m(x)\, P_k^m(x)\, dx = 0, \qquad n \ne k

and also

\int_{-1}^{1} \left[ P_n^m(x) \right]^2 dx = \frac{2}{2n + 1} \cdot \frac{(n + m)!}{(n - m)!}
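The normalization integral can be confirmed by quadrature; since [P_3^1(x)]^2 is a polynomial, a modest Gauss–Legendre rule is exact (a sketch; n = 3, m = 1 chosen as an example):

```python
import numpy as np
from math import factorial
from numpy.polynomial import legendre as L

# Integrate [P_3^1(x)]^2 over [-1, 1] with a 20-point Gauss-Legendre rule.
nodes, weights = L.leggauss(20)
P31 = np.sqrt(1 - nodes**2) * L.legval(nodes, L.legder([0, 0, 0, 1]))
integral = np.sum(weights * P31**2)

# 2/(2n+1) * (n+m)!/(n-m)! with n = 3, m = 1 gives 24/7.
expected = 2.0 / 7.0 * factorial(4) / factorial(2)
assert np.isclose(integral, expected)
```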
A function f(x) can be expanded in a series of associated Legendre functions

f(x) = A_m P_m^m(x) + A_{m+1} P_{m+1}^m(x) + A_{m+2} P_{m+2}^m(x) + \cdots

where the coefficients are determined by means of

A_k = \frac{2k + 1}{2} \cdot \frac{(k - m)!}{(k + m)!} \int_{-1}^{1} f(x)\, P_k^m(x)\, dx
Assigned Problems
P_n(x) = \frac{1}{2^n\, n!}\, \frac{d^n}{dx^n} (x^2 - 1)^n
2. Obtain the Legendre polynomial P4 (x) directly from Legendre’s equation of order 4
by assuming a polynomial of degree 4, i.e.
P_n(x) = \frac{1}{\pi} \int_0^{\pi} \left( x + \sqrt{x^2 - 1}\, \cos t \right)^n dt
f(x) = \begin{cases} 0 & -1 \le x \le 0 \\ x & 0 \le x \le 1 \end{cases}
6. Find the first three coefficients in the expansion of the function
f(\theta) = \begin{cases} \cos\theta & 0 \le \theta \le \pi/2 \\ 0 & \pi/2 \le \theta \le \pi \end{cases}
in a series of the form
f(\theta) = \sum_{n=0}^{\infty} A_n P_n(\cos\theta), \qquad 0 \le \theta \le \pi
7. Obtain the associated Legendre functions P_2^1(x), P_3^2(x) and P_2^3(x).
8. Verify that the associated Legendre function P_3^2(x) is a solution of Legendre's associated equation for m = 2, n = 3.
9. Verify the orthogonality relation

\int_{-1}^{1} P_n^m(x)\, P_k^m(x)\, dx = 0, \qquad n \ne k

for the associated Legendre functions P_2^1(x) and P_3^1(x).
10. Verify the normalization relation

\int_{-1}^{1} \left[ P_n^m(x) \right]^2 dx = \frac{2}{2n + 1} \cdot \frac{(n + m)!}{(n - m)!}

for the associated Legendre function P_1^1(x).
11. Obtain the Legendre functions of the second kind Q0 (x) and Q1 (x) by means of
Q_n(x) = P_n(x) \int \frac{dx}{\left[ P_n(x) \right]^2 (1 - x^2)}
12. Obtain the function Q3 (x) by means of the appropriate recurrence formula assuming
that Q0 (x) and Q1 (x) are known.
Selected References
2. Arfken, G., "Legendre Functions of the Second Kind," Mathematical Methods for Physicists, 3rd ed., Academic Press, Orlando, FL, 1985, pp. 701–707.
Chebyshev Polynomials
Reading Problems
The differential equation

(1 - x^2)\, \frac{d^2 y}{dx^2} - x\, \frac{dy}{dx} + n^2 y = 0, \qquad n = 0, 1, 2, 3, \ldots

is Chebyshev's equation. If we let x = cos t we obtain

\frac{d^2 y}{dt^2} + n^2 y = 0

whose general solution is

y = A \cos nt + B \sin nt

or as

y = A \cos(n \cos^{-1} x) + B \sin(n \cos^{-1} x), \qquad |x| \le 1

or equivalently

y = A\, T_n(x) + B\, U_n(x), \qquad |x| \le 1

where T_n(x) and U_n(x) are defined as Chebyshev polynomials of the first and second kind of degree n, respectively.
If we let x = cosh t we obtain

\frac{d^2 y}{dt^2} - n^2 y = 0

whose general solution is

y = A \cosh nt + B \sinh nt

or as

y = A \cosh(n \cosh^{-1} x) + B \sinh(n \cosh^{-1} x), \qquad |x| > 1

or equivalently

y = A\, T_n(x) + B\, U_n(x), \qquad |x| > 1

The Chebyshev polynomials of the first kind can also be written as

T_n(x) = \frac{1}{2} \left[ \left( x + i\sqrt{x^2 - 1} \right)^n + \left( x - i\sqrt{x^2 - 1} \right)^n \right]

The sum of the last two relationships gives the same result for T_n(x).
Chebyshev Polynomials of the First Kind of Degree n
The Chebyshev polynomials T_n(x) can be obtained by means of Rodrigues' formula

T_n(x) = \frac{(-2)^n\, n!}{(2n)!}\, \sqrt{1 - x^2}\, \frac{d^n}{dx^n} (1 - x^2)^{n - 1/2}, \qquad n = 0, 1, 2, 3, \ldots

The first twelve Chebyshev polynomials are listed in Table 1 and then as powers of x in terms of T_n(x) in Table 2.
Table 1: Chebyshev Polynomials of the First Kind

T_0(x) = 1
T_1(x) = x
T_2(x) = 2x^2 - 1
T_3(x) = 4x^3 - 3x
T_4(x) = 8x^4 - 8x^2 + 1
T_5(x) = 16x^5 - 20x^3 + 5x
T_6(x) = 32x^6 - 48x^4 + 18x^2 - 1
T_7(x) = 64x^7 - 112x^5 + 56x^3 - 7x
T_8(x) = 128x^8 - 256x^6 + 160x^4 - 32x^2 + 1
T_9(x) = 256x^9 - 576x^7 + 432x^5 - 120x^3 + 9x
T_{10}(x) = 512x^{10} - 1280x^8 + 1120x^6 - 400x^4 + 50x^2 - 1
T_{11}(x) = 1024x^{11} - 2816x^9 + 2816x^7 - 1232x^5 + 220x^3 - 11x
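Each entry in Table 1 is just cos(nθ) rewritten in the variable x = cos θ, which makes the table easy to spot-check numerically (a sketch for T_3):

```python
import numpy as np

# T_n(x) = cos(n arccos x); check the Table 1 entry T_3 = 4x^3 - 3x.
x = np.linspace(-1, 1, 11)
assert np.allclose(np.cos(3 * np.arccos(x)), 4 * x**3 - 3 * x)
```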
Table 2: Powers of x as functions of T_n(x)

1 = T_0
x = T_1
x^2 = \frac{1}{2}(T_0 + T_2)
x^3 = \frac{1}{4}(3T_1 + T_3)
x^4 = \frac{1}{8}(3T_0 + 4T_2 + T_4)
x^5 = \frac{1}{16}(10T_1 + 5T_3 + T_5)
x^6 = \frac{1}{32}(10T_0 + 15T_2 + 6T_4 + T_6)
x^7 = \frac{1}{64}(35T_1 + 21T_3 + 7T_5 + T_7)
x^8 = \frac{1}{128}(35T_0 + 56T_2 + 28T_4 + 8T_6 + T_8)
x^9 = \frac{1}{256}(126T_1 + 84T_3 + 36T_5 + 9T_7 + T_9)
x^{10} = \frac{1}{512}(126T_0 + 210T_2 + 120T_4 + 45T_6 + 10T_8 + T_{10})
x^{11} = \frac{1}{1024}(462T_1 + 330T_3 + 165T_5 + 55T_7 + 11T_9 + T_{11})
Generating Function for Tn (x)
The Chebyshev polynomials of the first kind can be developed by means of the generating
function
\frac{1 - tx}{1 - 2tx + t^2} = \sum_{n=0}^{\infty} T_n(x)\, t^n

Special values include

T_n(1) = 1, \qquad T_n(-1) = (-1)^n
Orthogonality Property of Tn (x)
We can determine the orthogonality properties for the Chebyshev polynomials of the first
kind from our knowledge of the orthogonality of the cosine functions, namely,
\int_0^{\pi} \cos(m\theta) \cos(n\theta)\, d\theta = \begin{cases} 0 & m \ne n \\ \pi/2 & m = n \ne 0 \\ \pi & m = n = 0 \end{cases}

Then substituting

T_n(x) = \cos(n\theta), \qquad \cos\theta = x

we obtain

\int_{-1}^{1} \frac{T_m(x)\, T_n(x)}{\sqrt{1 - x^2}}\, dx = \begin{cases} 0 & m \ne n \\ \pi/2 & m = n \ne 0 \\ \pi & m = n = 0 \end{cases}

We observe that the Chebyshev polynomials form an orthogonal set on the interval −1 ≤ x ≤ 1 with the weighting function (1 − x^2)^{−1/2}.
A function f(x) can be expanded in the Chebyshev series

f(x) = \sum_{n=0}^{\infty} A_n T_n(x)

where the coefficients A_n are given by

A_0 = \frac{1}{\pi} \int_{-1}^{1} \frac{f(x)\, dx}{\sqrt{1 - x^2}}, \qquad n = 0

and

A_n = \frac{2}{\pi} \int_{-1}^{1} \frac{f(x)\, T_n(x)\, dx}{\sqrt{1 - x^2}}, \qquad n = 1, 2, 3, \ldots
The following definite integrals are often useful in the series expansion of f(x):

\int_{-1}^{1} \frac{dx}{\sqrt{1 - x^2}} = \pi \qquad\qquad \int_{-1}^{1} \frac{x^3\, dx}{\sqrt{1 - x^2}} = 0

\int_{-1}^{1} \frac{x\, dx}{\sqrt{1 - x^2}} = 0 \qquad\qquad \int_{-1}^{1} \frac{x^4\, dx}{\sqrt{1 - x^2}} = \frac{3\pi}{8}

\int_{-1}^{1} \frac{x^2\, dx}{\sqrt{1 - x^2}} = \frac{\pi}{2} \qquad\qquad \int_{-1}^{1} \frac{x^5\, dx}{\sqrt{1 - x^2}} = 0
The T_m(x) also satisfy a discrete orthogonality relation over the (N + 1) points x_i, where

\theta_i = 0, \ \frac{\pi}{N}, \ \frac{2\pi}{N}, \ \ldots, \ (N - 1)\frac{\pi}{N}, \ \pi

and

x_i = \cos\theta_i

We have

\frac{1}{2} T_m(-1) T_n(-1) + \sum_{i=2}^{N-1} T_m(x_i) T_n(x_i) + \frac{1}{2} T_m(1) T_n(1) = \begin{cases} 0 & m \ne n \\ N/2 & m = n \ne 0 \\ N & m = n = 0 \end{cases}
The T_m(x) are also orthogonal over the following N points t_i, equally spaced in θ,

\theta_i = \frac{\pi}{2N}, \ \frac{3\pi}{2N}, \ \frac{5\pi}{2N}, \ \ldots, \ \frac{(2N - 1)\pi}{2N}

and

t_i = \cos\theta_i

\sum_{i=1}^{N} T_m(t_i)\, T_n(t_i) = \begin{cases} 0 & m \ne n \\ N/2 & m = n \ne 0 \\ N & m = n = 0 \end{cases}

The set of points t_i are clearly the midpoints in θ of the first case. The unequal spacing of the points x_i (and t_i) compensates for the weight factor

W(x) = (1 - x^2)^{-1/2}
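The discrete orthogonality over the midpoint nodes underlies Chebyshev interpolation and can be checked directly (a sketch; the values of N, m and n are arbitrary choices with m, n < N):

```python
import numpy as np

# Midpoint (Gauss-Chebyshev) nodes t_i = cos(theta_i),
# theta_i = (2i - 1) pi / (2N), i = 1..N.
N = 8
theta = (2 * np.arange(1, N + 1) - 1) * np.pi / (2 * N)
t = np.cos(theta)

T = lambda m, x: np.cos(m * np.arccos(x))  # T_m(x) = cos(m arccos x)

assert np.isclose(np.sum(T(3, t) * T(5, t)), 0.0)    # m != n
assert np.isclose(np.sum(T(3, t) * T(3, t)), N / 2)  # m = n != 0
assert np.isclose(np.sum(T(0, t) * T(0, t)), N)      # m = n = 0
```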
Additional Identities of Chebyshev Polynomials
The Chebyshev polynomials are both orthogonal polynomials and the trigonometric cosine functions in disguise; therefore they satisfy a large number of useful relationships.

The differentiation and integration properties are very important in analytical and numerical work. We begin with

T_n(x) = \cos(n\theta), \qquad x = \cos\theta

and

\frac{dT_n(x)}{dx} = \frac{n \sin n\theta}{\sin\theta}

so that

\frac{T_{n+1}'(x)}{n + 1} - \frac{T_{n-1}'(x)}{n - 1} = \frac{\sin(n + 1)\theta - \sin(n - 1)\theta}{\sin\theta} = \frac{2 \cos n\theta\, \sin\theta}{\sin\theta} = 2\, T_n(x), \qquad n \ge 2
Therefore

T_1'(x) = T_0, \qquad T_0'(x) = 0

We have the formulas for the differentiation of Chebyshev polynomials; these can be used to develop the following integration formulas for the Chebyshev polynomials:

\int T_n(x)\, dx = \frac{1}{2} \left[ \frac{T_{n+1}(x)}{n + 1} - \frac{T_{n-1}(x)}{n - 1} \right] + C, \qquad n \ge 2

\int T_1(x)\, dx = \frac{1}{4} T_2(x) + C

\int T_0(x)\, dx = T_1(x) + C
The first few shifted Chebyshev polynomials T_n^*(x) are

T_0^* = 1
T_1^* = 2x - 1
T_2^* = 8x^2 - 8x + 1
and the following powers of x as functions of T_n^*(x):

1 = T_0^*
x = \frac{1}{2}(T_0^* + T_1^*)
x^2 = \frac{1}{8}(3T_0^* + 4T_1^* + T_2^*)
x^3 = \frac{1}{32}(10T_0^* + 15T_1^* + 6T_2^* + T_3^*)
x^4 = \frac{1}{128}(35T_0^* + 56T_1^* + 28T_2^* + 8T_3^* + T_4^*)
The recurrence relation for the shifted polynomials is

T_{n+1}^*(x) = (4x - 2)\, T_n^*(x) - T_{n-1}^*(x), \qquad T_0^*(x) = 1

or

x\, T_n^*(x) = \frac{1}{4} T_{n+1}^*(x) + \frac{1}{2} T_n^*(x) + \frac{1}{4} T_{n-1}^*(x)

where

T_n^*(x) = \cos\left( n \cos^{-1}(2x - 1) \right) = T_n(2x - 1)

For the ordinary Chebyshev polynomials,

x\, T_n(x) = \frac{1}{2} \left[ T_{n+1}(x) + T_{n-1}(x) \right], \qquad n = 1, 2, 3, \ldots

and

x\, T_0(x) = T_1(x)
To illustrate the method, consider x^4:

x^4 = x^2 (x T_1) = \frac{x^2}{2} [T_2 + T_0] = \frac{x}{4} [T_1 + T_3 + 2T_1]

= \frac{1}{4} [3x T_1 + x T_3] = \frac{1}{8} [3T_0 + 3T_2 + T_4 + T_2]

= \frac{1}{8} T_4 + \frac{1}{2} T_2 + \frac{3}{8} T_0
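The same reduction is available directly in NumPy, which converts power-series coefficients (ordered from low degree to high) into Chebyshev coefficients (a sketch):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# x^4 as a Chebyshev series: expect 3/8 T_0 + 1/2 T_2 + 1/8 T_4.
coeffs = C.poly2cheb([0, 0, 0, 0, 1])
assert np.allclose(coeffs, [3/8, 0, 1/2, 0, 1/8])
```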
Given a power series representation

f(x) = \sum_{n=0}^{N} a_n x^n + E_N(x), \qquad |x| \le 1

where |E_N(x)| does not exceed an allowed limit, it is possible to reduce the degree of the polynomial by a process called economization of power series. The procedure is to convert the polynomial to a linear combination of Chebyshev polynomials:

\sum_{n=0}^{N} a_n x^n = \sum_{n=0}^{N} b_n T_n(x)

It may then be possible to drop some of the last terms without permitting the error to exceed the prescribed limit. Since |T_n(x)| ≤ 1, the number of terms which can be omitted is determined by the magnitudes of the coefficients b_n.
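A minimal sketch of this procedure using NumPy's conversion routines, applied to the degree-4 Taylor polynomial of e^x (the error handling is simplified; poly2cheb and cheb2poly do the Table 2/Table 4 bookkeeping automatically):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

a = [1.0, 1.0, 1/2, 1/6, 1/24]   # a_n for the Taylor series of e^x
b = C.poly2cheb(a)               # Chebyshev coefficients b_n

dropped = abs(b[4])              # deleting b_4 T_4 adds at most |b_4|
econ = C.cheb2poly(b[:4])        # economized cubic, back in power form

x = np.linspace(-1, 1, 201)
err = np.max(np.abs(np.polyval(a[::-1], x) - np.polyval(econ[::-1], x)))
assert err <= dropped + 1e-12    # |T_4(x)| <= 1 bounds the added error
```

Note how the degree drops from 4 to 3 while the added error stays below |b_4| everywhere on [−1, 1].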
The Chebyshev polynomials are useful in numerical work for the interval −1 ≤ x ≤ 1 because

1. |T_n(x)| ≤ 1 within −1 ≤ x ≤ 1

3. The maxima and minima are spread reasonably uniformly over the interval −1 ≤ x ≤ 1

5. They are easy to compute and to convert to and from a power series form.
The following table gives the Chebyshev polynomial approximation of several power series.
Table 3: Power Series and its Chebyshev Approximation

1. f(x) = a_0
   f(x) = a_0 T_0

2. f(x) = a_0 + a_1 x
   f(x) = a_0 T_0 + a_1 T_1

3. f(x) = a_0 + a_1 x + a_2 x^2
   f(x) = \left( a_0 + \frac{a_2}{2} \right) T_0 + a_1 T_1 + \frac{a_2}{2} T_2

4. f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3
   f(x) = \left( a_0 + \frac{a_2}{2} \right) T_0 + \left( a_1 + \frac{3a_3}{4} \right) T_1 + \frac{a_2}{2} T_2 + \frac{a_3}{4} T_3

5. f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4
   f(x) = \left( a_0 + \frac{a_2}{2} + \frac{3a_4}{8} \right) T_0 + \left( a_1 + \frac{3a_3}{4} \right) T_1 + \left( \frac{a_2}{2} + \frac{a_4}{2} \right) T_2 + \frac{a_3}{4} T_3 + \frac{a_4}{8} T_4

6. f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 + a_5 x^5
   f(x) = \left( a_0 + \frac{a_2}{2} + \frac{3a_4}{8} \right) T_0 + \left( a_1 + \frac{3a_3}{4} + \frac{5a_5}{8} \right) T_1 + \left( \frac{a_2}{2} + \frac{a_4}{2} \right) T_2 + \left( \frac{a_3}{4} + \frac{5a_5}{16} \right) T_3 + \frac{a_4}{8} T_4 + \frac{a_5}{16} T_5
Table 4: Formulas for Economization of Power Series

x = T_1
x^2 = \frac{1}{2}(1 + T_2)
x^3 = \frac{1}{4}(3x + T_3)
x^4 = \frac{1}{8}(8x^2 - 1 + T_4)
x^5 = \frac{1}{16}(20x^3 - 5x + T_5)
x^6 = \frac{1}{32}(48x^4 - 18x^2 + 1 + T_6)
x^7 = \frac{1}{64}(112x^5 - 56x^3 + 7x + T_7)
x^8 = \frac{1}{128}(256x^6 - 160x^4 + 32x^2 - 1 + T_8)
x^9 = \frac{1}{256}(576x^7 - 432x^5 + 120x^3 - 9x + T_9)
x^{10} = \frac{1}{512}(1280x^8 - 1120x^6 + 400x^4 - 50x^2 + 1 + T_{10})
x^{11} = \frac{1}{1024}(2816x^9 - 2816x^7 + 1232x^5 - 220x^3 + 11x + T_{11})
For easy reference the formulas for economization of power series in terms of Chebyshev polynomials are given in Table 4.
Assigned Problems
1. Obtain the first three Chebyshev polynomials T_0(x), T_1(x) and T_2(x) by means of Rodrigues' formula.
3. By means of the recurrence formula obtain Chebyshev polynomials T2 (x) and T3 (x)
given T0 (x) and T1 (x).
T_n(x) = \frac{1}{2} \left[ \left( x + i\sqrt{1 - x^2} \right)^n + \left( x - i\sqrt{1 - x^2} \right)^n \right]

where i = \sqrt{-1}.
x^2 = \sum_{n=0}^{3} A_n T_n(x)
Hypergeometric Functions
Reading Problems
Introduction
The hypergeometric function F(a, b; c; x) is defined as

F(a, b; c; x) = 1 + \frac{ab}{c}\, x + \frac{a(a + 1)\, b(b + 1)}{c(c + 1)}\, \frac{x^2}{2!} + \cdots

= \sum_{n=0}^{\infty} \frac{(a)_n (b)_n}{(c)_n}\, \frac{x^n}{n!}, \qquad |x| < 1, \quad c \ne 0, -1, -2, \ldots

where, using \Gamma(a + n) = (a)_n\, \Gamma(a),

y_1 = F(a, b; c; x) = \frac{\Gamma(c)}{\Gamma(a)\, \Gamma(b)} \sum_{n=0}^{\infty} \frac{\Gamma(a + n)\, \Gamma(b + n)}{\Gamma(c + n)}\, \frac{x^n}{n!}
Some Properties of F(a, b; c; x)

\frac{d}{dx} F(a, b; c; x) = \frac{ab}{c}\, F(a + 1, b + 1; c + 1; x)

\frac{d^k}{dx^k} F(a, b; c; x) = \frac{(a)_k (b)_k}{(c)_k}\, F(a + k, b + k; c + k; x), \qquad k = 1, 2, 3, \ldots
Recurrence Relations
There are 15 recurrence relations; one of the simplest is
Integral Representation
F(a, b; c; x) = \frac{\Gamma(c)}{\Gamma(b)\, \Gamma(c - b)} \int_0^1 t^{b-1} (1 - t)^{c-b-1} (1 - xt)^{-a}\, dt, \qquad |x| \le 1
Generating Function
where F(−n, b; c; x) denotes the hypergeometric polynomials

F(-n, b; c; x) = \sum_{i=0}^{n} \frac{(-n)_i (b)_i}{(c)_i}\, \frac{x^i}{i!}, \qquad -\infty < x < \infty
The incomplete Beta function is related to the hypergeometric function by

B_x(p, q) = \frac{x^p}{p}\, F(p, 1 - q; 1 + p; x)

with

B_1(p, q) = \frac{\Gamma(p)\, \Gamma(q)}{\Gamma(p + q)}
Elliptic Integrals
K(k) = \frac{\pi}{2}\, F\left( \frac{1}{2}, \frac{1}{2}; 1; k^2 \right)

E(k) = \frac{\pi}{2}\, F\left( -\frac{1}{2}, \frac{1}{2}; 1; k^2 \right)
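These identities can be confirmed with SciPy; note that scipy.special.ellipk and ellipe take the parameter m = k², not the modulus k (a sketch; the modulus value is arbitrary):

```python
import numpy as np
from scipy.special import hyp2f1, ellipk, ellipe

k = 0.6  # modulus; SciPy's elliptic routines take m = k**2
assert np.isclose(ellipk(k**2), np.pi / 2 * hyp2f1(0.5, 0.5, 1.0, k**2))
assert np.isclose(ellipe(k**2), np.pi / 2 * hyp2f1(-0.5, 0.5, 1.0, k**2))
```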
Legendre Functions
P_\nu(x) = F\left( -\nu, \nu + 1; 1; \frac{1 - x}{2} \right)

(which reduces to the Legendre polynomial P_n(x) when ν = n is an integer), and

Q_\nu(x) = \frac{\pi^{1/2}\, \Gamma(\nu + 1)}{2^{\nu+1}\, \Gamma(\nu + 3/2)\, x^{\nu+1}}\, F\left( 1 + \frac{\nu}{2}, \frac{1 + \nu}{2}; \nu + \frac{3}{2}; \frac{1}{x^2} \right), \qquad |x| > 1
Chebyshev Polynomials
T_n(x) = F\left( -n, n; \frac{1}{2}; \frac{1 - x}{2} \right)

U_n(x) = (n + 1)\, F\left( -n, n + 2; \frac{3}{2}; \frac{1 - x}{2} \right)
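A quick check of the T_n identity against the trigonometric definition (a sketch; the values of n and x are arbitrary):

```python
import numpy as np
from scipy.special import hyp2f1

# T_n(x) = F(-n, n; 1/2; (1 - x)/2), a terminating hypergeometric series.
n, x = 5, 0.3
Tn = np.cos(n * np.arccos(x))
assert np.isclose(hyp2f1(-n, n, 0.5, (1 - x) / 2), Tn)
```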
Bessel Function
J_\nu(x) = \frac{(x/2)^\nu}{\Gamma(\nu + 1)}\, {}_0F_1\left( \nu + 1; -\frac{x^2}{4} \right)
The confluent hypergeometric function (Kummer's function) is defined as

M(a; c; x) = \sum_{n=0}^{\infty} \frac{(a)_n}{(c)_n}\, \frac{x^n}{n!}, \qquad -\infty < x < \infty

also written as

{}_1F_1(a; c; x)
Differential Equation
Kummer's equation

x\, \frac{d^2 y}{dx^2} + (c - x)\, \frac{dy}{dx} - a\, y = 0

has y_1 = M(a; c; x) as one solution. The substitution y = x^{1-c}\, z transforms it into

x\, \frac{d^2 z}{dx^2} + (2 - c - x)\, \frac{dz}{dx} - (1 + a - c)\, z = 0

with solution

z = M(1 + a - c; 2 - c; x), \qquad c \ne 2, 3, 4, \ldots

and therefore

y_2 = x^{1-c}\, M(1 + a - c; 2 - c; x), \qquad c \ne 2, 3, 4, \ldots

is a second solution. The general solution is then

y = y_1 + y_2 = C_1\, M(a; c; x) + C_2\, x^{1-c}\, M(1 + a - c; 2 - c; x), \qquad c \ne 0, \pm 1, \pm 2, \ldots

A standard second solution is the function U(a; c; x), defined as

U(a; c; x) = \frac{\pi}{\sin c\pi} \left[ \frac{M(a; c; x)}{\Gamma(1 + a - c)\, \Gamma(c)} - \frac{x^{1-c}\, M(1 + a - c; 2 - c; x)}{\Gamma(a)\, \Gamma(2 - c)} \right], \qquad c \ne 0, -1, -2, \ldots
Integral Representations
M(a; c; x) = \frac{\Gamma(c)}{\Gamma(a)\, \Gamma(c - a)} \int_0^1 e^{xt}\, t^{a-1} (1 - t)^{c-a-1}\, dt, \qquad c > a > 0

U(a; c; x) = \frac{1}{\Gamma(a)} \int_0^{\infty} e^{-xt}\, t^{a-1} (1 + t)^{c-a-1}\, dt, \qquad a > 0, \ x > 0

With t = 1 − u the first integral becomes

M(a; c; x) = \frac{\Gamma(c)}{\Gamma(a)\, \Gamma(c - a)}\, e^x \int_0^1 e^{-xu}\, u^{c-a-1} (1 - u)^{a-1}\, du

Therefore

M(a; c; x) = e^x\, M(c - a; c; -x) \qquad \text{(Kummer's transformation)}
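Kummer's transformation is easy to verify with SciPy's confluent hypergeometric function (a sketch; the parameter values are arbitrary):

```python
import numpy as np
from scipy.special import hyp1f1

# M(a; c; x) = e^x M(c - a; c; -x)  (Kummer's transformation)
a, c, x = 1.3, 2.7, 0.8
assert np.isclose(hyp1f1(a, c, x), np.exp(x) * hyp1f1(c - a, c, -x))
```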
Special cases include

e^x = M(a; a; x)

\operatorname{erfc}(x) = \frac{1}{\sqrt{\pi}}\, e^{-x^2}\, U\left( \frac{1}{2}; \frac{1}{2}; x^2 \right)

\gamma(a, x) = \frac{x^a}{a}\, M(a; a + 1; -x)

\Gamma(a, x) = e^{-x}\, U(1 - a; 1 - a; x)

\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}}\, x\, M\left( \frac{1}{2}; \frac{3}{2}; -x^2 \right)
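For example, the erf relation can be checked numerically (a sketch; the sample point is arbitrary):

```python
import numpy as np
from scipy.special import hyp1f1, erf

# erf(x) = (2/sqrt(pi)) x M(1/2; 3/2; -x^2)
x = 0.7
assert np.isclose(erf(x), 2 / np.sqrt(np.pi) * x * hyp1f1(0.5, 1.5, -x**2))
```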
J_p(x) = \frac{(x/2)^p}{\Gamma(p + 1)}\, e^{-ix}\, M\left( p + \frac{1}{2}; 2p + 1; 2ix \right)

I_p(x) = \frac{(x/2)^p}{\Gamma(p + 1)}\, e^{-x}\, M\left( p + \frac{1}{2}; 2p + 1; 2x \right)

K_p(x) = \sqrt{\pi}\, (2x)^p\, e^{-x}\, U\left( p + \frac{1}{2}; 2p + 1; 2x \right)