Lamperti Type Laws

Lancelot F. James∗
Abstract: This paper obtains density and cdf formulae, and various distributional identities, for random variables defined as the ratio of two independent positive random variables, where one variable has an α-stable law, for 0 < α < 1, and the other has the law defined by power tempering the density of an α-stable random variable by a factor θ > −α. When θ = 0, these variables equate with the ratio investigated by Lamperti, which remarkably was shown to have a simple density. This variable arises in a variety of areas and gains importance from a close connection to the stable laws. This rationale motivates the investigation of its generalizations, which we refer to as Lamperti type laws. Here specifically the results are used to obtain results for three interesting quantities, which appear in a variety of contexts. Explicit distributional formulae and identities are derived for the class of positive generalized Linnik random variables. Then the best known results for the density of the time spent positive of a Bessel bridge of dimension 2 − 2α, and related quantities, are obtained. Additionally, integral representations and other identities for a class of generalized Mittag-Leffler functions are obtained. We will also describe the connections between these results and show how they generalize previous results in the literature.
AMS 2000 subject classifications: Primary 62G05; secondary 62F15.
Keywords and phrases: beta-gamma algebra, Bessel bridges, Dirichlet
means, Mittag-Leffler function, positive stable random variables.
1. Introduction
Let Sα , for 0 < α < 1, denote a positive α-stable random variable whose law is
specified by the Laplace transform,
E[e^{−λS_α}] = e^{−λ^α}
for λ > 0, and with density denoted as fα . Throughout, for τ > 0, let Gτ denote
a gamma(τ, 1) random variable, let βa,b denote a beta random variable with
parameters (a, b). Furthermore, for arbitrary random variables, X and R, when
we write the product XR, it will be assumed that X and R are independent
unless otherwise stated. In this paper, for θ > −α, our primary interest is to
∗ Supported in part by the grant HIA05/06.BM03 of the HKSAR
imsart-generic ver. 2007/04/13 file: LinnikArxiv.tex date: August 7, 2007
obtain density and cdf formulae, and various distributional identities, for random variables denoted as
X_{α,θ} = S_α / S_{α,θ}    (1.1)
where, independent of S_α, S_{α,θ} is a random variable having density

f_{S_{α,θ}}(t) = [Γ(θ + 1)/Γ(θ/α + 1)] t^{−θ} f_α(t),

and negative moments

E[S_{α,θ}^{−δ}] = [Γ(θ + 1)/Γ(θ/α + 1)] E[S_α^{−(δ+θ)}] = Γ((θ+δ)/α + 1) Γ(θ + 1) / [Γ(θ + δ + 1) Γ(θ/α + 1)].    (1.2)
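As a quick numerical sanity check of (1.2) (ours, not part of the paper), note that at α = 1/2 the explicit density of S_{1/2} makes the power-tempered variable explicit: tempering t^{−3/2}e^{−1/(4t)} by t^{−θ} gives S_{1/2,θ} =d 1/(4G_{θ+1/2}). The sketch below compares the two resulting moment formulae; the α = 1/2 reduction is our derivation and is flagged as such in the comments.

```python
import math

def tempered_neg_moment(theta, delta):
    # closed form in (1.2) at alpha = 1/2:
    # E[S_{1/2,theta}^(-delta)] = Gamma(2(theta+delta)+1) Gamma(theta+1)
    #                             / (Gamma(theta+delta+1) Gamma(2 theta+1))
    lg = math.lgamma
    return math.exp(lg(2*(theta+delta)+1) + lg(theta+1)
                    - lg(theta+delta+1) - lg(2*theta+1))

def tempered_neg_moment_via_gamma(theta, delta):
    # via S_{1/2,theta} =d 1/(4 G_{theta+1/2}) -- our alpha = 1/2 reduction,
    # obtained by tempering the explicit density of S_{1/2} by t^(-theta):
    # E[S^(-delta)] = 4^delta Gamma(theta+1/2+delta)/Gamma(theta+1/2)
    lg = math.lgamma
    return 4.0**delta * math.exp(lg(theta+0.5+delta) - lg(theta+0.5))

moments_agree = all(
    abs(tempered_neg_moment(t, d)/tempered_neg_moment_via_gamma(t, d) - 1.0) < 1e-10
    for t in (0.2, 0.5, 1.7) for d in (0.1, 0.9, 2.3))
```

The agreement is an instance of the Gauss duplication formula for the gamma function.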
Note S_{α,0} =d S_α; hence we see that X_{α,0} equates in distribution with the random variable we denote as

X_α = S_α / S′_α,
where S′_α is independent of S_α and has the same distribution. Remarkably, although S_α does not have a simple density, except for α = 1/2, Lamperti (38) (see also Chaumont and Yor ((11), exercise 4.2.1)) shows that the density of X_α is
f_{X_α}(y) = [sin(πα)/π] · y^{α−1} / (y^{2α} + 2y^α cos(πα) + 1)  for y > 0.    (1.3)
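The density (1.3) can be checked numerically. The sketch below (ours, not from the paper) verifies the inversion symmetry f(1/y)/y² = f(y), which reflects X_α =d 1/X_α, and that the density integrates to one, using the substitution v = y^α to remove the endpoint singularity.

```python
import math

def lamperti_density(y, a):
    # Lamperti's density (1.3)
    return (math.sin(math.pi*a)/math.pi)*y**(a-1)/(y**(2*a)+2*(y**a)*math.cos(math.pi*a)+1)

def simpson(g, lo, hi, n=2000):
    # composite Simpson rule (n even)
    h = (hi-lo)/n
    return (g(lo)+g(hi)+sum((4 if k % 2 else 2)*g(lo+k*h) for k in range(1, n)))*h/3

def total_mass(a):
    # splitting at y = 1 and using f(1/y)/y^2 = f(y), the substitution v = y^a
    # removes the y^(a-1) singularity and gives
    # integral_0^infty f = (2 sin(pi a)/(a pi)) * integral_0^1 dv/(v^2 + 2v cos(pi a) + 1)
    c = math.cos(math.pi*a)
    return (2*math.sin(math.pi*a)/(a*math.pi))*simpson(lambda v: 1/(v*v+2*c*v+1), 0.0, 1.0)

symmetry_ok = abs(lamperti_density(1/0.37, 0.6)/0.37**2 - lamperti_density(0.37, 0.6)) < 1e-12
mass_ok = all(abs(total_mass(a) - 1.0) < 1e-8 for a in (0.2, 0.5, 0.8))
```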
Owing to this we say that the random variables Xα,θ are of Lamperti type. Note
furthermore, when α = 1/2,
X_{1/2} =d G′_{1/2}/G_{1/2}  and  X_{1/2,θ} =d G_{θ+1/2}/G_{1/2},    (1.4)

where G′_{1/2} =d G_{1/2}, and G′_{1/2}, G_{1/2}, G_{θ+1/2} are all independent. See ((23), section 4.2) for
more on the variables (1.4) in relation to results of (14).
In general, the random variable, Xα , and its density, perhaps due to its close
relationship with a stable law, appears in a variety of places. Continuing with
the work of (38), for 0 < α < 1, and 0 < p < 1, let the random variable
A_{α,p}(t) =d ∫_0^t I(B_p^{(α)}(s) > 0) ds

denote the time spent positive of a p-skewed Bessel process of dimension 2 − 2α, denoted (B_p^{(α)}(s), s > 0), up till time t.
In general, see Barlow, Pitman and Yor (2), and Pitman and Yor (44), one
has
A_{α,p}(t)/t =d A_{α,p}(1) ≡ A_{α,p}.

Moreover, setting c = (p/q)^{1/α}, from (2), one has that

A_{α,p} =d cX_α / (cX_α + 1).    (1.5)
Thus from (1.3) one can obtain the density of A_{α,p} given in (38). Recall the Mittag-Leffler function

E_{α,1}(−λ) = Σ_{k=0}^∞ (−λ)^k / Γ(1 + kα),

which equates with the Laplace transform of S_α^{−α}. Furthermore, there is the
integral representation

E_{α,1}(−λ) = [sin(πα)/π] ∫_0^∞ e^{−λ^{1/α} y} y^{α−1} / (y^{2α} + 2y^α cos(πα) + 1) dy.
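Both displays admit a direct numerical check at α = 1/2, where E_{1/2,1}(−λ) = e^{λ²}erfc(λ) in closed form (a standard identity). The sketch below (ours) compares the series and the integral representation against it.

```python
import math

def mittag_leffler(a, lam, terms=80):
    # E_{a,1}(-lam) = sum_{k>=0} (-lam)^k / Gamma(1 + a k)
    return sum((-lam)**k/math.gamma(1+a*k) for k in range(terms))

def simpson(g, lo, hi, n=4000):
    h = (hi-lo)/n
    return (g(lo)+g(hi)+sum((4 if k % 2 else 2)*g(lo+k*h) for k in range(1, n)))*h/3

lam = 1.3
closed = math.exp(lam*lam)*math.erfc(lam)   # E_{1/2,1}(-lam) in closed form
series_ok = abs(mittag_leffler(0.5, lam) - closed) < 1e-10

# at alpha = 1/2 the integral representation reduces, with v = y^alpha, to
# (2/pi) * integral_0^infty exp(-lam^2 v^2)/(1 + v^2) dv
integral = (2/math.pi)*simpson(lambda v: math.exp(-(lam*v)**2)/(1+v*v), 0.0, 40.0)
integral_ok = abs(integral - closed) < 1e-6
```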
From the points above it is natural to think that the random variable Xα,θ
must have similar properties for more general models indexed by θ. We first
point out that our interest in the distributional properties of X_{α,θ} goes beyond
the applications we shall address here. In particular, the results developed here
are applied in quite interesting and important ways in a companion paper (21).
However, our focus here is to obtain results parallel to those discussed in the previous section. We will obtain formulae for the cdf and densities of X_{α,θ} in sections 2 to 4, which closely involves a parallel study of the positive Linnik random variables defined for θ > 0 as
L_{θ,α} = G_{θ/α}^{1/α} S_α
and satisfying E[P_{α,θ}(C)] = p. Density and cdf formulae for P_{α,θ}(C) were recently obtained by James, Lijoi and Pruenster (25) as part of a larger study of more general linear functionals of P_{α,θ}. In general these formulae for P_{α,θ}(C) were in the form of Abel transforms. These formulae are useful for analytic calculations. However, in the sense of representations with respect to strictly non-negative functionals, the best results were obtained for the cases of θ = 1 and θ = 1 − α. Importantly, density and cdf formulae were obtained in ((25), section 6.2, Corollary 6.2) for the case of θ = α. The importance is that the random variable P_{α,α}(C) satisfies
variable Pα,α (C) satisfies
P_{α,α}(C) =d ∫_0^1 I(B_p^{(α,br)}(s) > 0) ds    (1.7)
where B_p^{(α,br)}(s) is now a p-skewed Bessel bridge of dimension 2 − 2α. That is, P_{α,α}(C) equates in distribution to the time spent positive up to time 1 of a p-skewed Bessel bridge. This point may be read from Pitman and Yor ((49), section 4, see eq. (75)). Unlike the result for the Bessel process, very nice formulae for the density of P_{α,α}(C), comparable to say the results obtained by ((25), section 6.1, example 6.1) for P_{α,1−α}(C) and P_{α,1}(C), have proven elusive. Yano (54), independent of (25), has obtained a formula for the cdf of P_{α,α}(C) which is equivalent to the one obtained by (25). Yano and Yano (55), unaware of the density formula in (25), recently obtain a similar, albeit more implicit, formula for the density of (1.7). These works continue a line of investigation by (5; 32; 33; 34; 53).
The density formula we obtain in section 5 for (1.7) leads to an improvement over the existing results and is comparable to the expressions obtained in (25) for the cases of θ = 1 and θ = 1 − α. In section 6, we show that in the case of rational values of α, S_{α,θ} and X_{α,θ} can be expressed in terms of products of independent beta and gamma random variables. Some of the results in sections 2 to 6 appear in a preliminary version of this work in James (24). In section 7
where

(θ/α + 1)_k = Γ(θ/α + 1 + k) / Γ(θ/α + 1).

So when θ = 0, one recovers the Mittag-Leffler function as

E_{α,1}^{(1)}(−λ) = E_{α,1}^1(−λ) = E_{α,1}(−λ).
The function (1.8) is a special case of the function introduced by Prabhakar (50),

E_{ρ,µ}^γ(−λ) = Σ_{k=0}^∞ (−λ)^k (γ)_k / (k! Γ(ρk + µ)),    (1.9)
where ρ, µ, γ ∈ C, Re(ρ) > 0. That is, (1.8) is the case where γ = (θ + α)/α and µ = θ + 1. Our results overlap with Djrbashian's ((17), p. 15, Theorems 1.3–5) integral representation in the cases of θ = 0 and θ = α. The quantity (1.8) represents a special sub-class of yet more general Mittag-Leffler type functions which are discussed, for instance, in Kilbas, Saigo and Megumi (29). See also (10; 19; 30; 1; 3; 31).
Our strategy to obtain results for Xα,θ and related quantities, rests in part on
results for random variables known as Dirichlet means ((12; 13)), and the closely
related class of random variables with distributions that are generalized gamma
convolutions (see (7)). In this section we will very briefly define these quantities
and the relevant results we shall use.
Suppose that X is a positive random variable with distribution function F_X. Now for θ > 0, we say that a positive infinitely divisible random variable T_θ has a generalized gamma convolution, GGC(θ, F_X), with parameters (θ, F_X), if its Laplace transform is of the form

E[e^{−λT_θ}] = e^{−θψ_{F_X}(λ)},    (1.10)

where

ψ_{F_X}(λ) = ∫_0^∞ log(1 + λx) F_X(dx) = E[log(1 + λX)].    (1.11)

A positive random variable M is called a Dirichlet mean of order θ based on F_X, and hence we write M =d M_θ(F_X), if its Cauchy–Stieltjes transform of order θ satisfies

E[(1 + λM)^{−θ}] = E[(1 + λM_θ(F_X))^{−θ}] = e^{−θψ_{F_X}(λ)}.    (1.12)

That is, if and only if T_θ =d G_θ M_θ(F_X), where T_θ is GGC(θ, F_X).
We next proceed to define the cdf and density formula for M_θ(F_X). Define

Φ_{F_X}(t) = ∫_0^∞ log(|t − x|) I(t ≠ x) F_X(dx) = E[log(|t − X|) I(t ≠ X)].

Furthermore, define

∆_θ(t|F_X) = (1/π) sin(πθF_X(t)) e^{−θΦ_{F_X}(t)},
where, using a Lebesgue–Stieltjes integral, F_X(t) = ∫_0^t F_X(dx). Cifarelli and Regazzini (13) (see also (14)) apply inversion formulae to obtain the distributional formula for M_θ(F_X) as follows. For all θ > 0, the cdf can be expressed as

P(M_θ(F_X) ≤ x) = ∫_0^x (x − t)^{θ−1} ∆_θ(t|F_X) dt,    (1.13)
provided that θF_X possesses no jumps of size greater than or equal to one. If we let ξ_{θF_X}(·) denote the density of M_θ(F_X), it takes its simplest form for θ = 1, which is

ξ_{F_X}(x) = ∆_1(x|F_X) = (1/π) sin(πF_X(x)) e^{−Φ(x)}.    (1.14)
Density formulae for θ > 1 are described as

ξ_{θF_X}(x) = (θ − 1) ∫_0^x (x − t)^{θ−2} ∆_θ(t|F_X) dt.    (1.15)
An expression for the density which holds for all θ > 0 was recently obtained by James, Lijoi and Prünster (25), as follows,

ξ_{θF_X}(x) = (1/π) ∫_0^x (x − t)^{θ−1} d_θ(t|F_X) dt,    (1.16)

where

d_θ(t|F_X) = (d/dt) sin(πθF_X(t)) e^{−θΦ(t)}.
For additional formulae, see (13), (51) and (39). In addition to these results we shall be using the recent work of James (23) on Dirichlet means. One important fact from that work is that multiplication of a Dirichlet mean functional by an independent beta random variable leads again to a Dirichlet mean functional. Specifically, from Theorem 2.1 of James (23), for 0 < σ ≤ 1 and θ > 0, let β_{θσ,θ(1−σ)} denote a beta random variable independent of M_{θσ}(F_X); then

β_{θσ,θ(1−σ)} M_{θσ}(F_X) =d M_θ(F_{XY_σ}),    (1.17)
where F_{XY_σ} denotes the distribution of the independent product XY_σ. This result specializes when θ = 1, where now the density of M_1(F_{XY_σ}) is obtainable from (1.14) as shown in ((23), Theorem 2.2). Precisely,

ξ_{F_{XY_σ}}(x) = (x^{σ−1}/π) sin(πF_{XY_σ}(x)) e^{−σΦ_{F_X}(x)}  for x > 0.    (1.18)
Other details that we shall use can be directly accessed from that manuscript.
A survey of some properties of generalized gamma convolutions and Dirichlet
means is given in (26).
Remark 1.1. There are several points to note before we continue. A GGC(θ, F_X) random variable may also be representable as a GGC(η, F_R) random variable for η ≠ θ and F_R ≠ F_X. That is, the representation T_θ = G_θ M_θ(F_X) is not unique. Furthermore, if M =d M_θ(F_X), it may also be equal in distribution to a Dirichlet mean of another order and based on another cdf.
Remark 1.2. T_θ represents a sub-class of generalized gamma convolutions. The larger class, which contains for instance S_α, is defined by replacing F_X by an appropriate sigma-finite measure and has been extensively studied in (7).
Remark 1.3. Throughout we will be using the fact that if R is a gamma random variable, then independent random variables R, X, Z satisfying RX =d RZ imply that X =d Z. This is true because gamma random variables are simplifiable. For the precise meaning of this term and conditions see Chaumont and Yor ((11), sec. 1.12 and 1.13). This fact also applies to the case where R is a positive stable random variable.
2. Linnik laws
We now obtain results for X_{α,θ} through a study of the generalized positive Linnik random variables, say L_{θ,α}, defined for θ > 0. Using a double expectation argument it follows that

E[e^{−λ G_{θ/α}^{1/α} S_α}] = (1 + λ^α)^{−θ/α}.    (2.1)
From Bondesson ((7), p. 38), we see that L_{θ,α} is a GGC(θ, F_{X_α}) random variable. Moreover (see (42; 7; 16)), the Lévy exponent has several interesting representations:

ψ_{F_{X_α}}(λ) = E[log(1 + λX_α)] = (1/α) ln(1 + λ^α) = ∫_0^∞ (1 − e^{−λs}) l_α(s) ds,    (2.2)

where

l_α(s) = (1/s) E_{α,1}(−s^α) = (1/s) E[e^{−sX_α}] = s^{−1} E[e^{−s/X_α}]

is the Lévy density of the Linnik variable.
Note that although it is known that L_{θ,α} =d G_{θ/α}^{1/α} S_α is a GGC(θ, F_{X_α}) random variable, and hence one can deduce that

L_{θ,α} =d G_{θ/α}^{1/α} S_α =d G_θ M_θ(F_{X_α}),

it is not known what M_θ(F_{X_α}) is for general α and θ, nor has one worked out its density or cdf. We will show that for θ > 0, M_θ(F_{X_α}) =d X_{α,θ}.
However, before proceeding to verify this, it is important to highlight important related results when θ = α. It is known (see (16; 28; 37)) that when θ = α one has

L_{α,α} =d G_1 X_α =d G_1^{1/α} S_α,    (2.3)

where G_1 is exponential(1) and X_α has density (1.3). As a side point, this sets up a unique feature of this Linnik random variable with its Lévy density. That is,

F_{G_1^{1/α} S_α}(x) = 1 − E[e^{−x/X_α}] = 1 − E_{α,1}(−x^α),    (2.4)
G_1^{1/α} =d G_1/S_α =d G_α/S_{α,α},    (2.6)

which may be found in ((11), section 4.19, see also p. 114 comment (a)) and appears earlier in (2) and (45) in connection with local times of Bessel processes and bridges. Combining (2.3) and (2.6) leads to the representation

L_{α,α} =d G_1 X_α =d G_1^{1/α} S_α =d G_α X_{α,α}.
Perhaps more importantly, multiplying X_{α,α} by β_{α,1−α} and using James ((23), Theorem 2.1), i.e. (1.17), it follows that

X_α =d M_1(F_{X_α Y_α}) =d β_{α,1−α} X_{α,α}.

This fact, coupled with the explicit density of X_α in (1.3), will lead to explicit expressions for Φ_{F_{X_α}}.
With these points in mind we describe some more pertinent features of X_α.

Proposition 2.1. Let X_α =d S_α/S′_α, having density (1.3). Then,
(i) The cdf of X_α can be represented explicitly as

F_{X_α}(x) = 1 − (1/(πα)) cot^{−1}( cot(πα) + x^α/sin(πα) ).    (2.7)
(iii) One has

sin(παF_{X_α}(y)) = y^α sin(πα(1 − F_{X_α}(y))) = y^α sin(πα) / [y^{2α} + 2y^α cos(πα) + 1]^{1/2}.    (2.9)
(iv) Additionally,

cos(παF_{X_α}(y)) = (y^α cos(πα) + 1) / [y^{2α} + 2y^α cos(πα) + 1]^{1/2}.
It is then easy to obtain the form of the cdf of (X_α)^α by direct integration. [This may be done using a mathematical package, if not immediately clear]. Now using the fact that this equates with F_{X_α}(y^{1/α}) yields statement [(i)]. Statement [(ii)] then follows by using properties of the inverse cotangent. In order to establish [(iii)], use (2.8), which yields the identity

y = F_{X_α}^{−1}(F_{X_α}(y)) = [ sin(πα F_{X_α}(y)) / sin(πα(1 − F_{X_α}(y))) ]^{1/α}.    (2.10)
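The cdf (2.7) and the identity (2.9) can be checked against the density (1.3) numerically; the sketch below (ours) uses arccot(z) = π/2 − atan(z), which keeps (2.7) continuous for all 0 < α < 1.

```python
import math

def cdf(x, a):
    # (2.7), with arccot(z) = pi/2 - atan(z) taken in (0, pi)
    arccot = lambda z: math.pi/2 - math.atan(z)
    s, c = math.sin(math.pi*a), math.cos(math.pi*a)
    return 1 - arccot(c/s + x**a/s)/(math.pi*a)

def density(x, a):
    # Lamperti's density (1.3)
    return (math.sin(math.pi*a)/math.pi)*x**(a-1)/(x**(2*a)+2*(x**a)*math.cos(math.pi*a)+1)

# numerical differentiation of (2.7) should recover (1.3)
h = 1e-5
derivative_ok = all(abs((cdf(x+h, a)-cdf(x-h, a))/(2*h) - density(x, a)) < 1e-6
                    for a in (0.3, 0.5, 0.7) for x in (0.4, 1.0, 2.5))

# first equality in (2.9): sin(pi a F(x)) = x^a sin(pi a (1 - F(x)))
identity_ok = all(abs(math.sin(math.pi*a*cdf(x, a)) - x**a*math.sin(math.pi*a*(1-cdf(x, a)))) < 1e-9
                  for a in (0.3, 0.7) for x in (0.4, 2.5))
```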
3. Distributional identities
The first result establishes the key distributional identities we discussed earlier.
Proposition 3.1. Let L_{θ,α} =d G_{θ/α}^{1/α} S_α denote the Linnik variable which is GGC(θ, F_{X_α}); then, for θ > 0,
(ii) Since L_{θ,α} is a GGC(θ, F_{X_α}) random variable, statement [(i)] implies that

S_α/S_{α,θ} =d X_{α,θ} =d M_θ(F_{X_α}).

(iii) The identity G_1^{1/α} S_α =d G_1 X_α indicates that

M_1(F_{X_α Y_α}) =d X_α =d S_α/S′_α,    (3.1)

where F_{X_α Y_α} denotes the cdf of X_α Y_α.
Proof. Note that the density of L_{θ,α} =d G_{θ/α}^{1/α} S_α is expressible as

C y^{θ−1} ∫_0^∞ t^{−θ} e^{−(y/t)^α} f_{S_α}(t) dt = C y^{θ−1} ∫_0^∞ t^{−θ} E[e^{−(y/t)S_α}] f_{S_α}(t) dt.
(ii) For θ > −α, G_{1+θ} X_{α,θ} =d G_{θ+α} X_{α,θ+α}. That is, the relevant value of σ in (1.17) is (θ + α)/(1 + θ). Statement [(ii)] follows from (1.17) and statement [(i)]. Now statement [(ii)] combined with statement [(i)] of Proposition 3.1 leads to

G_{1+θ} S_α/S_{α,θ} =d G_{θ+α} S_α/S_{α,θ+α} =d G_{(θ+α)/α}^{1/α} S_α.

The result then follows by the fact that S_α is simplifiable.
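Since all the variables involved have explicit Mellin transforms, the identity G_{1+θ} X_{α,θ} =d G_{(θ+α)/α}^{1/α} S_α can be confirmed by comparing moments of order δ < α. A sketch (ours), using the standard stable-moment formula E[S_α^δ] = Γ(1 − δ/α)/Γ(1 − δ):

```python
import math
lg = math.lgamma

def log_moment_lhs(a, theta, d):
    # E[(G_{1+theta} S_a / S_{a,theta})^d]: gamma moment, stable moment
    # E[S_a^d] = Gamma(1-d/a)/Gamma(1-d) for 0 < d < a, and (1.2) for S_{a,theta}^{-d}
    return (lg(1+theta+d) - lg(1+theta)
            + lg(1-d/a) - lg(1-d)
            + lg((theta+d)/a+1) + lg(theta+1) - lg(theta+d+1) - lg(theta/a+1))

def log_moment_rhs(a, theta, d):
    # E[(G_{(theta+a)/a}^{1/a} S_a)^d]
    return lg((theta+a)/a + d/a) - lg((theta+a)/a) + lg(1-d/a) - lg(1-d)

mellin_ok = all(abs(log_moment_lhs(a, t, 0.4*a) - log_moment_rhs(a, t, 0.4*a)) < 1e-10
                for a in (0.3, 0.5, 0.9) for t in (0.25, 1.5))
```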
The next result contains more identities.
Proposition 3.3. Suppose that θ + α ≤ δ; then G_{1+θ} X_{α,θ} is equivalent in distribution to

G_δ X_{α,δ} [β_{((θ+α)/α, (δ−(θ+α))/α)}]^{1/α} =d G_{1+δ−α} X_{α,δ−α} [β_{((θ+α)/α, (δ−(θ+α))/α)}]^{1/α}.

Note that G_{1+θ} X_{α,θ} =d G_{θ+α} X_{α,θ+α}, so that X_{α,θ} =d β_{θ+α,1−α} X_{α,θ+α}. As consequences,
(i) For θ + α < δ,

β_{1+θ,δ−(α+θ)} X_{α,θ} =d X_{α,δ−α} [β_{((θ+α)/α, (δ−(θ+α))/α)}]^{1/α}.    (3.5)
(iv) The results above lead to parallel statements between S_{α,θ} and S_{α,δ} by removing S_α on both sides of the equations. For example, from statement [(ii)],

S_{α,θ}^{−α} =d S_{α,δ}^{−α} β_{((θ+α)/α, (δ−(θ+α))/α)} (β_{δ,1+θ−δ})^α.    (3.6)
Proof. First use the fact from statement [(ii)] of Proposition 3.2 that

G_{1+θ} X_{α,θ} =d G_{(θ+α)/α}^{1/α} S_α.

Now since δ > θ + α one may apply the beta-gamma algebra to the right-hand side by multiplying and dividing by G_{δ/α}^{1/α}. This leads to

G_{1+θ} X_{α,θ} =d [β_{((θ+α)/α, (δ−(θ+α))/α)}]^{1/α} [G_{δ/α}^{1/α} S_α].

The result is then concluded by using statement [(i)] and the fact that gamma variables are simplifiable. Statement [(iv)] follows from the fact that S_α is simplifiable.
It is clear from their description of J_{s,s/α} in Bertoin and Yor ((4), Lemma 6, statement [(iii)]) that J_{s,s/α} =d S_{α,s}^{−α} for s > 0. So setting s = θ + α, one recovers the first equality in (3.4). Now, less obviously, we can use (3.6) with δ = 1 + θ, along with (3.7), to deduce that for t < s/α,

J_{s,t}^{(α)} =d S_{α,α(t−1)}^{−α}.

So what we can say is that our version identifies the equivalence between the J^{(α)} and S_{α,θ} variables, as well as provides additional interpretations. See also James and Yor ((27), Corollary 1) for a closely related result.
We now focus on obtaining explicit distribution formulae for the pertinent random variables based on their representations as Dirichlet means. In relation to (1.13), (1.14), (1.15) and (1.16), Proposition 2.1 gives precise details on the pertinent cdf F_{X_α}; it then remains to obtain a nice expression for the quantity Φ_{F_{X_α}}(x) for x > 0. The key to calculating S_α(x) is the fact that we showed that X_α is a mean functional of the type M_1(F_{X_α Y_α}), as described in (3.1) of Proposition 3.1. This sets up an equivalence between the form of the density of X_α obtained by Lamperti (38) and that of M_1(F_{X_α Y_α}) obtained from (1.18). Hence we have the following calculation.
Proposition 4.1. For 0 < α < 1, set S_α(x) := E[log |x − X_α|] = Φ_{F_{X_α}}(x). Then for x > 0,

S_α(x) = (1/(2α)) log(x^{2α} + 2x^α cos(απ) + 1).
Proof. Since X_α =d M_1(F_{X_α Y_α}), it follows by using (1.18) that the density of X_α satisfies the equivalence

f_{X_α}(x) = (1/π) sin(πα[1 − F_{X_α}(x)]) e^{−αS_α(x)} x^{α−1},

where on the left-hand side we use the expression in (1.3). Now applying the identity in (2.9) shows that

f_{X_α}(x) = (1/π) · x^{α−1} sin(πα) / [x^{2α} + 2x^α cos(πα) + 1]^{1/2} · e^{−αS_α(x)}.
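Proposition 4.1 can be illustrated by simulation at α = 1/2, where the formula reads S_{1/2}(x) = log(x + 1) and, by (1.4), X_{1/2} is a squared standard Cauchy variable. A seeded Monte Carlo sketch (ours, with a loose tolerance appropriate to sampling error):

```python
import math, random

random.seed(20070807)

def sample_X_half():
    # at alpha = 1/2, X_{1/2} =d G'_{1/2}/G_{1/2} =d (standard Cauchy)^2, cf. (1.4)
    return math.tan(math.pi*(random.random()-0.5))**2

x = 2.0
n = 400_000
estimate = sum(math.log(abs(x - sample_X_half())) for _ in range(n))/n
target = math.log(1 + x)   # Proposition 4.1 at alpha = 1/2: S_{1/2}(x) = log(x+1)
mc_ok = abs(estimate - target) < 0.05
```

The closed form here can also be seen directly from the classical fact E[log|a − C|] = (1/2)log(1 + a²) for a standard Cauchy C, applied at a = ±√x.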
We now obtain a general description of the distribution of X_{α,θ} for θ > −α.

Proposition 4.2. The form of the cdf for X_{α,θ} =d M_θ(F_{X_α}), for all θ > 0, is given by (1.13), with

∆_θ(x|F_{X_α}) = (1/π) sin(πθF_{X_α}(x)) / [x^{2α} + 2x^α cos(απ) + 1]^{θ/(2α)},

where F_{X_α} is given in (2.7). Furthermore, a general expression for the density of X_{α,θ} is obtained from (1.16) with
where

∆_{θ+1}(t|F_{XY_{(θ+α)/(1+θ)}}) = t^{α−1} sin(π[(θ + α)F_{X_α}(t) + (1 − α)]) / (π [t^{2α} + 2t^α cos(απ) + 1]^{(θ+α)/(2α)}).

Expressions for the densities are also obtainable from this last expression.
Proof. Since X_{α,θ} is shown to be equivalent in distribution to the mean functional M_θ(F_{X_α}), its cdf results from applying (1.13) along with Proposition 2.1 and Proposition 4.1. The expression in (4.1) is obtained by differentiating ∆_θ(x|F_{X_α}) and using the trigonometric identity sin(w − z) = sin(w)cos(z) − sin(z)cos(w), with w = πα and z = πθF_{X_α}(x). The expression in (4.2), which is a special case of (4.1), follows after applying (2.9). Now for the general cdf of X_{α,θ}, for θ > −α, in (4.3), we first use (3.3). We then apply (1.13) with 1 + θ and F_{XY_{(θ+α)/(1+θ)}} in place of the generic θ and F_X. Now from James ((23), equation (2.3)) we have for arbitrary XY_σ that
Proposition 4.3. Let 0 < θ ≤ 1; then the densities of the random variables

β_{θ,1−θ} X_{α,θ} =d M_1(F_{X_α Y_θ}) =d X_{α,1} [β_{(θ/α, (1−θ)/α)}]^{1/α}

can be expressed as

(1/π) · x^{θ−1} sin(πθ[1 − F_{X_α}(x)]) / [x^{2α} + 2x^α cos(απ) + 1]^{θ/(2α)}.
The special case of X_α is recovered by setting θ = α. Setting θ = 1 gives the density of X_{α,1}, which can also be directly obtained from (1.14) and hence is equivalent to ∆_1(x|F_{X_α}). Furthermore, for 0 < θ ≤ 1, the density of L_{θ,α} = G_{θ/α}^{1/α} S_α is given by

(1/π) ∫_0^∞ e^{−xr} sin(πθF_{X_α}(r)) / [r^{2α} + 2r^α cos(απ) + 1]^{θ/(2α)} dr.    (4.4)
Proof. This result follows from an application of ((23), Theorem 2.2), that is, apply (1.17) and (1.18), along with Proposition 2.1, statement [(ii)] of Proposition 3.3 and Proposition 4.1.
So far, except for the case of X_{α,1}, the densities of X_{α,θ} for θ > 0 are given in terms of integrals involving functions that can take on negative values. In contrast, the densities of β_{θ,1−θ} X_{α,θ}, for the range 0 < θ ≤ 1, have a nice form given in Proposition 4.3. The next result shows how we can use Proposition 4.3 to obtain better expressions for X_{α,θ}, for the range θ ≤ 1 − α.
Proposition 4.4. Suppose that 0 < θ ≤ 1 − α; then

X_{α,θ} =d β_{1,θ} M_1(F_{X_α Y_{θ+α}}).
The next result allows one to use Proposition 4.4 to obtain expressions for
arbitrary θ as follows. We will also use this result in section 7.
Proposition 4.5. Set θ = Σ_{j=1}^k θ_j where θ_j > 0. Furthermore, let (D_1, . . . , D_k) denote a Dirichlet random vector having density proportional to Π_{i=1}^k x_i^{θ_i − 1}. That is, each D_i =d β_{θ_i, θ−θ_i}. Then,

X_{α,θ} =d Σ_{j=1}^k D_j X_{α,θ_j}.
In this section we will focus on the special case of Xα,α as it can be used to
determine the density of the occupation time of a Bessel bridge. This random
variable also plays an important role in other contexts.
Now define the non-negative functions

I_{α,1}(x) = [sin(πα)/π] ∫_0^{min(x^α,1)} (x − r^{1/α})^{α−1} (1 − r^2) / [r^2 + 2r cos(απ) + 1]^2 dr

and

I_{α,2}(x) = [sin(πα)/π] ∫_{x^{−α}}^{max(x^{−α},1)} (x − r^{−1/α})^{α−1} (1 − r^2) / [r^2 + 2r cos(απ) + 1]^2 dr.
Proposition 4.6. An expression for the density of X_{α,α}, for all 0 < α < 1, is of the form

f_{X_{α,α}}(x) = I_{α,1}(x) if 0 < x ≤ 1, and f_{X_{α,α}}(x) = I_{α,1}(1) − I_{α,2}(x) if x > 1,    (4.6)

where I_{α,1}(1) − I_{α,2}(x) ≥ 0 for x > 1.
Proof. Using (1.16) and (4.2), one may write

f_{X_{α,α}}(x) = ∫_0^x (x − y)^{α−1} · [αy^{α−1}(1 − y^{2α}) sin(πα)] / (π [y^{2α} + 2y^α cos(απ) + 1]^2) dy.

Now note that (1 − y^{2α}) is positive for 0 < y < 1, and negative for y > 1. Hence it follows that if x ≤ 1, then by the change of variable r = y^α, f_{X_{α,α}}(x) equates with I_{α,1}(x). For x > 1, one can split the integral above into the difference of two positive quantities, where the first term is I_{α,1}(1) and the second term is I_{α,2}(x), which is seen by writing 1 − y^{2α} = −y^{2α}(1 − y^{−2α}) and applying the change of variable r = y^{−α}.
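At α = 1/2 Proposition 4.6 can be checked against a closed form: by (1.4), X_{1/2,1/2} =d G_1/G_{1/2}, whose density works out (a direct calculation of ours, not stated in the paper) to (1/2)(1 + x)^{−3/2}. The sketch below compares this with I_{1/2,1}(x):

```python
import math

def simpson(g, lo, hi, n=400):
    h = (hi-lo)/n
    return (g(lo)+g(hi)+sum((4 if k % 2 else 2)*g(lo+k*h) for k in range(1, n)))*h/3

def I_half(x):
    # I_{1/2,1}(x) for 0 < x <= 1, after substituting r = sqrt(x) sin(t), which
    # removes the (x - r^2)^(-1/2) endpoint singularity:
    #   I = (1/pi) integral_0^{pi/2} (1 - x sin^2 t)/(1 + x sin^2 t)^2 dt
    g = lambda t: (1 - x*math.sin(t)**2)/(1 + x*math.sin(t)**2)**2
    return simpson(g, 0.0, math.pi/2)/math.pi

# target: density of X_{1/2,1/2} =d G_1/G_{1/2}, namely (1/2)(1+x)^(-3/2)
density_ok = all(abs(I_half(x) - 0.5*(1+x)**(-1.5)) < 1e-8 for x in (0.1, 0.5, 1.0))
```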
Note that the representation of the density in terms of I_{α,1} and I_{α,2} is quite good, as the integrands in I_{α,1} and I_{α,2} are non-negative quantities. In the next result we give expressions for ranges of α which can be considered to be even better. Notice first that

sin(2πα[1 − F_{X_α}(x)]) = [sin(2πα) + 2x^α sin(πα)] / (1 + 2x^α cos(πα) + x^{2α}).    (4.7)
Proposition 4.7. The following results hold for X_{α,α} =d S_α/S_{α,α}.
(i) Suppose that α ≤ 1/2; then X_{α,α} =d β_{1,α} M_1(F_{X_α Y_{2α}}), where the density of M_1(F_{X_α Y_{2α}}) =d β_{2α,1−2α} X_{α,2α} is expressible as
An expression for the range 2/3 < α ≤ 3/4 also follows from a combination of Proposition 4.4 and Proposition 4.5. One can continue in this way for larger values of α, but if one is interested in expressions for densities, Proposition 4.6 seems to be better in those cases.
P_{α,θ}(C) arises from exponentially tilting the density of L_{θ,α} = G_θ M_θ(F_{X_α}). Precise details of that operation may be found in ((23), section 3). This may be checked, in the sense of tilting, by noting that the Laplace transform of a GGC(θ, F_{A,p}) random variable is given by

E[e^{−c(1+λ) G_{θ/α}^{1/α} S_α}] / E[e^{−c G_{θ/α}^{1/α} S_α}] = (q + p(1 + λ)^α)^{−θ/α},    (5.1)

which, for θ > 0, equates with the Cauchy–Stieltjes transform of order θ of P_{α,θ}(C).
We point out that the connection of P_{α,θ}(C), via tilting, to X_{α,θ} =d M_θ(F_{X_α}), for θ > 0, that we have made appears to be a new insight. These points, using James ((23), Theorem 3.1 and Proposition 3.1), allow us to use the expressions for the density of X_{α,θ} to obtain alternative expressions, and in the cases corresponding to Propositions 4.4, 4.6 and 4.7, improvements on the formulae for P_{α,θ}(C) given in (25), for α ≠ 1/2. Carlton ((9), Remark 3.1) obtains a nice description of the density for the case of (1/2, θ) for θ > −1/2.
Proposition 5.1. The density of P_{α,θ}(C), denoted as f_{α,θ}(y|p), is given by

f_{α,θ}(y|p) = [q^{1/α} (1 − y)^{θ−2} / (q^{θ/α} p^{1/α})] f_{X_{α,θ}}( q^{1/α} y / (p^{1/α}(1 − y)) ).
Proof. This is a special case of statement [(i)] of James ((23), Theorem 3.1), with c = (p/q)^{1/α} and

e^{θψ_{F_{X_α}}(c)} = (1 + c^α)^{θ/α} = q^{−θ/α}.
The next result, which is related to Proposition 4.3, may be seen as an extension of the result of Pitman and Yor ((49), Proposition 15), which is the case where θ = α, that is, β_{α,1−α} P_{α,α}(C).
Proposition 5.2. For each 0 < θ ≤ 1, the density of the random variable M_1(F_{A_{α,p} Y_θ}) =d β_{θ,1−θ} P_{α,θ}(C) is

(1/π) · y^{θ−1} sin(πθ[1 − F_{X_α}( q^{1/α} y / (p^{1/α}(1 − y)) )]) / [y^{2α} q^2 + 2qp y^α (1 − y)^α cos(απ) + (1 − y)^{2α} p^2]^{θ/(2α)}.    (5.2)
Proof. There are two ways to obtain (5.2). The first way is to use directly (1.17) and (1.18). The second is to use the fact that M_1(F_{A_{α,p} Y_θ}) relates to M_1(F_{X_α Y_θ}) by the tilting operation discussed in James ((23), section 3.1, Proposition 3.1). Hence one applies that result to the density given in Proposition 4.3 to obtain (5.2).
Now as a special case of ((25), Theorem 6.1), which is easily deducible from (46), one has the following distributional relationship:

P_{α,θ}(C) =d β_{α+θ,1−α} P_{α,α+θ}(C) + (1 − β_{α+θ,1−α}) Y_p.    (5.3)
We now specialize the above results to the important case of the time spent
positive by a p-skewed Bessel bridge, which leads to improvements on the results
of (25; 54). The first general expression follows directly from Propositions 4.3
and 5.2.
Proposition 5.5. Set p_α = p^{1/α}/(q^{1/α} + p^{1/α}). Then, using Proposition 4.2, an expression for the density of P_{α,α}(C) for all α is of the form
where

h_{α,p}(y) = I_{α,1}( (q/p)^{1/α} y/(1 − y) )  if 0 < y ≤ p_α,
h_{α,p}(y) = I_{α,1}(1) − I_{α,2}( (q/p)^{1/α} y/(1 − y) )  if p_α < y < 1.    (5.5)
The next result follows from a combination of statement [(i)] of Proposition 4.4 and Proposition 5.2.

Proposition 5.6. Suppose that α ≤ 1/2; then the density of P_{α,α}(C) is expressible as

2pα [sin(πα)/π] y^{2α−1} (1 − y)^{2α−1} g_α(y),

where

g_α(y) = ∫_0^1 α u^α (1 − u)^{α−1} [p(1 − y)^α u^α cos(πα) + q y^α] / [q^2 y^{2α} + 2pq y^α (1 − y)^α u^α cos(απ) + u^{2α} (1 − y)^{2α} p^2]^2 du,

where β_{2α,1−2α} P_{α,2α}(C) =d M_1(F_{A_{α,p} Y_{2α}}) has density, for 0 < y < 1. The density of β_{2α,1−2α}[1 − P_{α,2α}(C)] =d M_1(F_{A_{α,q} Y_{2α}}) is expressed similarly, with q playing the role of p.
We now show that when α = m/n, for integers m < n, the quantities X_{α,θ} and S_{α,θ} can be expressed in terms of products and ratios of independent beta and gamma random variables. The results for S_{α,θ} will extend the following result for S_α, as given in Chaumont and Yor ((11), p. 113):

(m/S_{m/n})^m =d n^n ( Π_{k=1}^{m−1} β_{(k/n, k(1/m − 1/n))} ) ( Π_{k=m}^{n−1} G_{k/n} ).
Now using the fact that a positive stable random variable is simplifiable, we get

(G_{nθ/m})^{n/m} =d G_θ / S_{m/n,θ},    (6.1)
One may then apply the gamma multiplication identity, which is found in Chaumont and Yor ((11), p. 113), to both sides of (6.1). This gives

(m/S_{m/n,θ})^m Π_{j=0}^{m−1} G_{(θ+j)/m} =d n^n Π_{l=0}^{n−1} G_{θ/m + l/n},

and the result follows by appealing to the fact that products of gamma random variables are simplifiable.
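Identity (6.1) reduces, on taking Mellin transforms, to an identity between gamma functions, which the following sketch (ours) checks using (1.2):

```python
import math
lg = math.lgamma

def lhs(m, n, theta, d):
    # E[((G_{n theta/m})^{n/m})^d]
    return lg(n*theta/m + n*d/m) - lg(n*theta/m)

def rhs(m, n, theta, d):
    # E[(G_theta / S_{m/n,theta})^d], using (1.2) with alpha = m/n
    a = m/n
    return (lg(theta+d) - lg(theta)
            + lg((theta+d)/a + 1) + lg(theta+1) - lg(theta+d+1) - lg(theta/a + 1))

identity_61_ok = all(abs(lhs(m, n, t, d) - rhs(m, n, t, d)) < 1e-10
                     for (m, n) in ((1, 2), (2, 3), (3, 5))
                     for t in (0.4, 1.3) for d in (0.7, 2.1))
```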
Remark 6.1. Since Proposition 6.1 expresses the random variables X_{m/n,θ} and S_{m/n,θ} in terms of products of independent beta and gamma random variables, one may use the result of Springer and Thompson (52) to express their densities in terms of Meijer G functions. This is significant as integrals of Meijer G functions, which constitute many special functions, can be computed by mathematical packages such as Mathematica. See also (21).
Remark 6.2. In terms of the representation of X_{α,θ} =d M_θ(F_{X_α}) as a mean functional, Proposition 6.1 generalizes the expression obtained by Cifarelli and Melilli (14) for α = 1/2 to the case of α = m/n. See ((23), section 4.2) for related discussions.
In this last section we prove some quite interesting results for the important generalization of the Mittag-Leffler function given by E_{α,1+θ}^{(θ/α+1)}(−λ), as described in (1.8). In doing so, we also obtain some new representations for the density of G_θ^{1/α} S_α, say f_{G_θ^{1/α} S_α}, and relationships to the cdf of X_{α,θ}, say F_{X_{α,θ}}. In particular, the forthcoming result can be seen as an extension of (2.4) and (2.5). Note furthermore that using simple cancellations involving gamma functions it is easy to show that for θ > 0,
Now by first conditioning on S_{α,θ}, and using statement [(i)], one gets

P(G_1 > λ/S_{α,θ}^α) = E[e^{−λ/S_{α,θ}^α}] = Γ(θ + 1) E_{α,θ+1}^{(θ/α+1)}(−λ).
The result in statement [(ii)] is concluded by the use of the distributional equality G_1^{1/α} S_{α,θ} =d G_1 S_{α,θ}/S_α, which is equivalent to G_1/X_{α,θ}. In order to obtain the integral representation in statement [(iii)], use statement [(ii)] to get

E_{α,θ+1}^{(θ/α+1)}(−λ) = (1/Γ(θ + 1)) E[F_{X_{α,θ}}(G_1/λ^{1/α})].
Statement [(v)] follows from statement [(i)] and Proposition 4.5. For statement [(vi)] one uses the fact, see for instance Chamati and Tonchev ((10), equation (2.5)), that

∫_0^∞ λ^{θ−1} e^{−λy} E_{α,θ}^{θ/α}(−λ^α) dλ = (1 + y^α)^{−θ/α}.

But this is equivalent to the Laplace transform of the random variable G_{θ/α}^{1/α} S_α, as seen in (2.1). Hence, λ^{θ−1} E_{α,θ}^{θ/α}(−λ^α) = f_{G_{θ/α}^{1/α} S_α}(λ).
References
[6] Bertoin, J., Fujita, T., Roynette, B. and Yor, M. (2006). On a par-
ticular class of self-decomposable random variables : the duration of a Bessel
excursion straddling an independent exponential time. Prob. Math. Stat., 26,
p. 315-366.
[7] Bondesson, L. (1992). Generalized gamma convolutions and related classes
of distributions and densities. Lecture Notes in Statistics, 76. Springer-
Verlag, New York.
[8] Bourgade, P., Fujita, T. and Yor, M. (2007). Euler's formulae for ζ(2n) and products of Cauchy variables. Electron. Comm. Probab. 12, 73–80.
[9] Carlton, M. A. (2002). A family of densities derived from the three-
parameter Dirichlet process. J. Appl. Probab. 39, 764-774.
[10] Chamati, H. and Tonchev, N. S. (2006). Generalized Mittag-Leffler
functions in the theory of finite-size scaling for systems with strong
anisotropy and/or long-range interaction. J. Phys. A 39 469–478.
[11] Chaumont, L. and Yor, M. (2003). Exercises in probability. A guided
tour from measure theory to random processes, via conditioning. Cambridge
Series in Statistical and Probabilistic Mathematics, 13, Cambridge Univer-
sity Press.
[12] Cifarelli, D. M. and Regazzini, E. (1979). Considerazioni generali sull'impostazione bayesiana di problemi non parametrici. Le medie associative nel contesto del processo aleatorio di Dirichlet I, II. Riv. Mat. Sci. Econom. Social. 2, 39–52.
[13] Cifarelli, D.M. and Regazzini, E. (1990). Distribution functions of
means of a Dirichlet process. Ann. Statist. 18, 429–442 (Correction in Ann.
Statist. (1994) 22, 1633–1634).
[14] Cifarelli, D.M. and Melilli, E. (2000). Some new results for Dirichlet priors. Ann. Statist. 28, 1390–1413.
[15] Devroye, L. (1990). A note on Linnik's distribution. Statist. Probab. Lett. 9, 305–306.
[16] Devroye, L. (1996). Random variate generation in one line of code, in:
1996 Winter Simulation Conference Proceedings, ed. J.M. Charnes, D.J.
Morrice, D.T. Brunner and J.J. Swain, pp. 265-272, ACM.
[17] Djrbashian, M. M. (1993). Harmonic analysis and boundary value problems in the complex domain. Translated from the manuscript by H. M. Jerbashian and A. M. Jerbashian. Operator Theory: Advances and Applications, 65. Birkhäuser Verlag, Basel.
[18] Fujita, T. and Yor, M. (2006). An interpretation of the results of the BFRY paper in terms of certain means of Dirichlet processes. Preprint.
[19] Hilfer, R., and Seybold, H. J. (2006). Computation of the general-
ized Mittag-Leffler function and its inverse in the complex plane. Integral
Transforms Spec. Funct. 17 637–652.
[20] Hjort, N.L. and Ongaro, A. (2005). Exact inference for random Dirich-
let means. Stat. Inference Stoch. Process., 8 227–254.
[21] Ho, M.-W., James, L.F. and Lau, J.W. (2007). Gibbs partitions
(EPPF’s) derived from a stable subordinator are Fox H and Meijer G trans-
forms. arXiv:0708.0619v1 [math.PR]