Position and height of the Global Maximum of a twice differentiable Stochastic Process

Centre for Mathematical Sciences
Mathematical Statistics
Lund University, Sweden

Abstract

A formula for the density of (T, Z), the position and height of the global maximum of a stochastic process in a closed interval, is given. The formula is derived using the generalized Rice's formula. The presented result can be applied both to stationary and non-stationary processes under mild assumptions on the process. The formula for the density is explicit but involves integrals that have to be computed using numerical integration. The computation of the density is discussed and some numerical examples are given.

Keywords: Stochastic process, global maximum, supremum, generalized Rice's formula, extremes

Research supported in part by the Knut and Alice Wallenberg Foundation through the Gothenburg Stochastic Centre.

1 Introduction

Finding the distribution of the global maximum is a classical problem in probability. Most of the research on the properties of the global maximum of a stochastic process concentrates on the distribution of extreme heights; see e.g. Leadbetter et al. (1983). Often the studied process is Gaussian, and the distribution is given by means of upper and lower bounds; see Diebolt and Posse (1996) and references therein. Introducing the location of the global maximum makes the analysis more complicated. As far as we know, the joint density of the position and height of the global maximum has not been studied before. We turn now to some definitions and preliminary results.

Let C = C([0, L]; R) be the space of continuous functions ω : [0, L] → R. Denote by F the completed σ-algebra of uniform convergence and let P be the probability measure defined on sets in F. In what follows we assume that the sample paths of the process have absolutely continuous derivatives with probability one; see Cramér and Leadbetter (1967) for suitable conditions.

In the following definition and lemma we assume that ω(t) is deterministic. The derivative and the second derivative of a function ω(t) will be denoted ω̇(t) and ω̈(t), respectively. Denote by Z(ω) and T(ω) the height and position of the global maximum of ω(t):

    Z(ω) = sup {ω(t) : t ∈ [0, L]},    T(ω) = inf {t ∈ [0, L] : ω(t) = Z(ω)}.    (1)

The position of the maximum T is either at one of the endpoints {0, L} or in the interior of the interval; in the latter case ω̇(T) = 0. Analysis of the (T, Z)-distribution for the case when the global maximum is also a local maximum is a non-trivial problem. Here, we shall derive the distribution of (T, Z) as the limit distribution of a family of random variables.

Definition 1 For a fixed level ε ≥ 0, define Aε = {0, L} ∪ {t ∈ [0, L] : ω̇(t) = ε}; then the position and height of the global ε-maximum, Tε(ω) and Zε(ω) say, are given by

    Zε(ω) = sup {ω(t) : t ∈ Aε},    Tε(ω) = inf {t ∈ Aε : ω(t) = Zε(ω)}.

Obviously, for continuously differentiable ω, (T0, Z0) = (T, Z). The following lemma, proved in the appendix, gives conditions under which (Tε, Zε) converges to (T0, Z0) as ε goes to zero.

Lemma 2 If ω(t), t ∈ [0, L], is continuously differentiable then 0 ≤ Z(ω) − Zε(ω) ≤ L·ε. Furthermore, if there is only one t ∈ [0, L] such that ω(t) = Z(ω), then Tε → T as ε goes to zero.
A simple example in which Tε does not converge to T is as follows. Consider ω having only one local maximum, at t* ∈ (0, L), and let Z(ω) = ω(t*), i.e. T0(ω) = T(ω) = t*. Suppose further that ω(t*) = ω(L); then Tε(ω) = L for all ε > 0, while T0(ω) = T(ω) = t*.

In Section 2 we present the generalized Rice's formula for the expected number of marked crossings. The formula will be used to derive Fε(t, h), the distribution of (Tε, Zε). (This will be done in Section 3.) However, the formula is valid only for almost all ε ≥ 0, while we are interested in the distribution of (Tε, Zε) for ε = 0. Continuity properties of Fε(t, h) as a function of ε are therefore needed. These issues and the computation of F0(t, h) will be discussed in Section 4, while in Section 5 some numerical examples will be given. For clarity of presentation, several proofs are moved to an appendix.

2 Generalized Rice's formula

Assume that X(t), t ∈ [0, 1], is a stationary stochastic process. For a fixed level u, denote by μ⁺(u) the expected number of times X crosses the level u in the upward direction. The following classical result is called Rice's formula; see Leadbetter et al. (1983).

Theorem 3 If the process X(t) is Gaussian and the derivative Ẋ(t) exists in quadratic mean (h⁻¹(X(t + h) − X(t)) → Ẋ(t) in quadratic mean as h goes to zero), then

    μ⁺(u) = ∫₀^∞ z f_{Ẋ(0),X(0)}(z, u) dz,    (2)

where f_{Ẋ(0),X(0)}(z, u) is the joint density of (Ẋ(0), X(0)).

Rice (1944, 1945) and Kac (1943) obtained (2) for a certain class of Gaussian processes and for polynomials with random coefficients, respectively. The formula (2) has been generalized in many directions. In this section we shall present the generalization which is particularly suited for the problem discussed in this paper.

Under the assumptions of the theorem, if f(u) denotes the density of X(0), then μ⁺(u)/f(u) is a version of the following conditional expectation:

    E[ |Ẋ(0)| 1_{Ẋ(0)>0} | X(0) = u ] = μ⁺(u) / f(u),

where the indicator 1_{A} is equal to one if the statement A is true and zero otherwise. In general, the left hand side is defined only for almost all u. Consequently, μ⁺(u) can be computed using the following integral:

    μ⁺(u) = E[ ∫₀¹ 1_{Ẋ(s)>0, X(s)=u} dH₀(s) ] = E[ |Ẋ(0)| 1_{Ẋ(0)>0} | X(0) = u ] f_{X(0)}(u)  for a.a. u,    (3)

where "for a.a. u" means that the equality is valid for almost all u, and H₀ denotes the zero-dimensional Hausdorff (counting) measure. Actually this result is true for any stationary a.s. absolutely differentiable process X as long as E[|Ẋ(0)|] < ∞. The formula (3) follows from Theorem 4, which in a more general version is given in Zähle (1984). In that paper the Hausdorff area and the coarea theorem, see Federer (1969), have been used to study properties of crossings for random fields. The one-dimensional version of the result, given here, follows also from Banach (1925); see Rychlik (2000) for details.

Theorem 4 Assume X(t) is a.s. absolutely continuous and Y(t) is vector valued and a.s. measurable, e.g. càdlàg. Let q : [0, L] × C → [0, ∞) be a measurable function. If E[|Ẋ(t)|] < +∞ then

    ∫_B E[ ∫₀^L 1_{Y(s)∈A, X(s)=x} q(s, X) dH₀(s) ] dx
        = ∫_B ∫₀^L E[ |Ẋ(s)| q(s, X) 1_{Y(s)∈A} | X(s) = x ] f_{X(s)}(x) ds dx.    (4)

There is some difficulty in interpreting (4). The conditional expectation

    E[ |Ẋ(s)| q(s, X) 1_{Y(s)∈A} | X(s) = x ] f_{X(s)}(x) = H(x, s)

is only well defined with probability one with respect to (x, s) (H(x, s) can be seen as a class of functions). In order to use the formula we need first to specify the version of the conditional expectation we wish to consider. The problem of choosing a "proper" conditional density is a delicate issue and will be discussed later on. In the following remark we demonstrate that Eq. (3) follows from Theorem 4.

Remark 5 Consider formula (4) on the interval [0, 1] and choose B = (−∞, u], Y(s) = Ẋ(s), A = (0, +∞) and q(s, X) = 1_{X(s)≤u}; then

    ∫_{−∞}^u E[ ∫₀¹ 1_{Ẋ(s)>0, X(s)=x} dH₀(s) ] dx = ∫_{−∞}^u ∫₀¹ E[ |Ẋ(s)| 1_{Ẋ(s)>0} | X(s) = x ] f_{X(s)}(x) ds dx.

Now, by differentiating both sides of the equation with respect to u and using the assumed stationarity of X, we obtain (3). Note that the differentiation with respect to u implies that Eq. (3) is proved only for almost all u.
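For a stationary Gaussian process the integral in (2) can also be evaluated in closed form, which gives a convenient numerical check. In the sketch below λ0 and λ2, the variances of X(0) and Ẋ(0), are assumed values, and the independence of X(0) and Ẋ(0) used here is the stationary Gaussian case:

```python
import numpy as np

# Rice's formula (2): mu+(u) = int_0^inf z f_{X'(0),X(0)}(z, u) dz.
# For a stationary Gaussian process, X(0) and X'(0) are independent with
# variances lam0 and lam2 (the zeroth and second spectral moments).
lam0, lam2 = 1.0, 0.5   # assumed spectral moments
u = 0.8                 # level

def phi(x, var):
    return np.exp(-x**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

# numerical evaluation of the integral in (2) by the trapezoidal rule
z = np.linspace(0.0, 20.0 * np.sqrt(lam2), 200001)
f = z * phi(z, lam2) * phi(u, lam0)
dz = z[1] - z[0]
mu_numeric = float(np.sum((f[1:] + f[:-1]) / 2.0) * dz)

# the well-known closed form implied by (2)
mu_closed = np.sqrt(lam2 / lam0) / (2.0 * np.pi) * np.exp(-u**2 / (2.0 * lam0))
```

The two values agree to within the discretization error of the quadrature.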

3 The distribution of (Tε, Zε)

In this section we shall employ Theorem 4 to derive the distribution of (Tε, Zε). Let X(s) = ω̇(s), Y(s) = ω(s), A = (−∞, h] and

    q(s, ω̇) = 1 if T_u(ω) = s where u = max(0, ω̇(s)), and q(s, ω̇) = 0 otherwise;

on the set {ω̇(s) = ε} with ε ≥ 0 this coincides with the indicator of {Tε(ω) = s}. (Note that in Definition 1, Tε is a function of ω. However, Tε can also be defined as a function of the derivative ω̇, since Tε(ω) = Tε(c + ω) for any constant c.) Now it is easy to see that

    P(0 < Tε ≤ t, Zε ≤ h) = E[ ∫₀^t 1_{ω(s)≤h, ω̇(s)=ε} q(s, ω̇) dH₀(s) ],

and hence the next theorem follows from Theorem 4.

Theorem 6 Consider ω with a.s. absolutely continuous derivative ω̇ such that E[|ω̈(t)|] < ∞ for all t ∈ [0, L]. If for all t ∈ [0, L] the distribution of the derivative process has a density f_{ω̇(t)}(u), then

    P(Tε = 0, Zε ≤ h) = E[ 1_{ω(0)≤h} 1_{Tε(ω)=0} ],    (5)
    P(Tε = L, Zε ≤ h) = E[ 1_{ω(L)≤h} 1_{Tε(ω)=L} ],    (6)

and, for any t ∈ (0, L),

    P(0 < Tε ≤ t, Zε ≤ h) = ∫₀^t E[ |ω̈(s)| q(s, ω̇) 1_{ω(s)≤h} | ω̇(s) = ε ] f_{ω̇(s)}(ε) ds  for a.a. ε,    (7)

where "for a.a. ε" means that the equality holds for almost all ε ≥ 0.

Proof: The theorem follows from Theorem 4 once the measurability of q(s, ω̇) has been established. This is done in Lemma 8. □

Assume that a version of the conditional expectation has been chosen. Then the distribution function of (Tε, Zε) is, for almost all ε, equal to the following function:

    Fε(t, h) =  E[1_{ω(0)≤h} 1_{Tε(ω)=0}],                                              t = 0,
                E[1_{ω(0)≤h} 1_{Tε(ω)=0}] + F(t, h; ε),                                  t ∈ (0, L),
                E[1_{ω(0)≤h} 1_{Tε(ω)=0}] + E[1_{ω(L)≤h} 1_{Tε(ω)=L}] + F(L, h; ε),      t = L,    (8)

where

    F(t, h; ε) = ∫₀^t E[ |ω̈(s)| q(s, ω̇) 1_{ω(s)≤h} | ω̇(s) = ε ] f_{ω̇(s)}(ε) ds.    (9)

Corollary 7 Under the assumptions of Theorem 6, if Fε(L, h) is a continuous function of ε and h, then P(Z ≤ h) = F0(L, h).

Proof: By Lemma 2, Zε converges to Z as ε goes to zero, and hence the statement follows from the assumed continuity of Fε(L, h). □

Since Tε need not converge to T0, the analysis of the joint distribution of (T, Z) is more difficult. The key is to demonstrate that with probability one there is only one t such that ω(t) = Z(ω). This is done in the following lemma (the proof is given in the appendix).

Lemma 8 Assume ω is a.s. continuously differentiable and the distribution of the derivative ω̇(t) has a density f_{ω̇(t)}(x) for all t ∈ [0, L]; then each of the following holds:

(0) The functions q(t, ω̇), 1_{Tε(ω)=0}(ω) and 1_{Tε(ω)=L}(ω) are measurable.

(I) If the density f_{ω(t)}(x) exists and is bounded in x and in t ∈ [0, L], then, for any ε ≥ 0, P(Zε = h) = 0 and P(Tε = t) = 0 for all h and t ∈ (0, L).

(II) If, in addition to the assumptions in (I), for any δ > 0 the joint density f_{ω(s)−ω(L), ω̇(s)}(x, y) exists and is bounded in x, y and in s ∈ [0, L − δ], then

    lim_{ε→0} P(Tε = 0, Zε ≤ h) = P(T = 0, Z ≤ h),
    lim_{ε→0} P(Tε = L, Zε ≤ h) = P(T = L, Z ≤ h).

(III) If, in addition to the assumptions in (I) and (II), for any δ > 0 the joint density f_{ω(t)−ω(s), ω̇(s), ω̇(t)}(x, y, z) exists and is bounded in x, y, z and in (s, t) ∈ A_δ = {(s, t) ∈ [0, L] × [0, L] : |t − s| ≥ δ}, then (Tε, Zε) → (T, Z) a.s. as ε goes to zero, and

    lim_{ε→0} P(Tε ≤ t, Zε ≤ h) = P(T ≤ t, Z ≤ h).    (10)

We shall now give a formula for fε(t, x), the joint density of (Tε, Zε).

Corollary 9 Assume that the joint density f_{ω(t),ω̇(t)}(x, u) exists for all t ∈ [0, L]; then, for almost all ε, the density of (Tε, Zε) is given by

    fε(t, x) = fε^E(t, x) + fε^C(t, x),    (11)

where

    fε^E(t, x) = E[1_{Tε=0} | ω(0) = x] f_{ω(0)}(x) δ₀(t) + E[1_{Tε=L} | ω(L) = x] f_{ω(L)}(x) δ_L(t)

and

    fε^C(t, x) = E[ |ω̈(t)| q(t, ω̇) | ω̇(t) = ε, ω(t) = x ] f_{ω(t),ω̇(t)}(x, ε),    (12)

where δ_s(t) = δ(t − s) is the Dirac function. The conditional expectations are understood to be those used in (9).

4 Evaluation of the density of (Tε, Zε)

In this section we shall discuss the choice of the version of the conditional expectation in the definition of the density fε(t, x). Obviously, if the joint density of (ω(t), ω̇(t), ω̈(t), q(t, ω̇)) were known, then (12) could be used to compute the function F(t, h; ε) or the density fε(t, x). However, q(t, ω̇) is a function of an infinite sequence {(ω(sᵢ), ω̇(sᵢ))}, i = 1, 2, …, see below, and hence the density is in general unknown. Consequently we shall define the expectation as a limit of a sequence of suitable approximations.

Let {sᵢ}, i = 1, 2, …, with s₁ = 0 and s₂ = L, be a dense subset of [0, L] and let

    qN(t, ω̇) = 1_{(ω(t) − ω(sᵢ) ≥ 0 or ω̇(sᵢ) ≤ ω̇(t)₊) for all i ≤ N},
    qN(0, ω) = 1_{(ω(0) − ω(sᵢ) ≥ 0 or ω̇(sᵢ) ≤ ε) for all i ≤ N},    (13)
    qN(L, ω) = 1_{(ω(L) − ω(sᵢ) ≥ 0 or ω̇(sᵢ) ≤ ε) for all i ≤ N},

where x₊ = max(0, x). Define

    fε(t, x; N) = fε^E(t, x; N) + fε^C(t, x; N),    (14)

where

    fε^E(t, x; N) = E[qN(0, ω) | ω(0) = x] f_{ω(0)}(x) δ₀(t) + E[qN(L, ω) | ω(L) = x] f_{ω(L)}(x) δ_L(t),
    fε^C(t, x; N) = E[ |ω̈(t)| qN(t, ω̇) | ω̇(t) = ε, ω(t) = x ] f_{ω(t),ω̇(t)}(x, ε).

Since qN(t, ω̇) ≥ q(t, ω̇) while qN(0, ω) ≥ 1_{Tε=0} and qN(L, ω) ≥ 1_{Tε=L}, the functions fε(t, x; N) satisfy, for almost all ε and any version of fε(t, x) in (11), fε(t, x; N) ≥ fε(t, x) for almost all (t, x). Next we shall derive two sufficient conditions for the convergence of fε(t, x; N) to a version of fε(t, x). The limit of fε(t, x; N), which always exists since the functions are positive and non-increasing in N, can be used to define the conditional expectation in (11). Thus we can say that fε(t, x) is the limit of fε(t, x; N) for all ε ≥ 0.

The first condition, (A), will be used in the construction of a numerical algorithm to compute the density of (T, Z); the second, (B), applies to processes with known joint densities of the variables involved.

Theorem 10 Assume that, for any N and t ∈ [0, L], the vector of variables entering (13) has a non-degenerate density. Further, suppose that

    ∫₀^L E[ |ω̈(s)| | ω̇(s) = ε ] f_{ω̇(s)}(ε) ds < ∞.    (15)

(A) If

    lim_{N→∞} ∫_{−∞}^{+∞} ∫₀^L fε(s, x; N) ds dx = 1,    (16)

then, for all t ∈ (0, L) and almost all ε ≥ 0, fε(t, x) = lim_{N→∞} fε(t, x; N).

(B) If, for all s, t ∈ [0, L] and any δ > 0, the densities f_{ω̇(s)−ω̇(t)}(y) and f_{ω(s)−ω(t), ω̇(s)−ω̇(t)}(x, y) are bounded in x, y and in s such that |t − s| ≥ δ, then, for almost all ε ≥ 0, fε(t, x) = lim_{N→∞} fε(t, x; N).

Proof: The proof is given in the appendix. □

Lemma 8 and Theorem 10 give conditions for the validity of the following approximation scheme. First, define fε(s, z) to be lim_{N→∞} fε(s, z; N); then, with

    Fε(t, h; N) = ∫_{−∞}^h ∫₀^t fε(s, z; N) ds dz,

we have Fε(t, h) = lim_{N→∞} Fε(t, h; N) for almost all ε, and Fε → F0 as ε → 0. Consequently, if fε(t, x; N) is continuous in ε and the convergence is uniform, then

    f_{T,Z}(t, x) = lim_{N→∞} f0(t, x; N).

Often continuity of fε(t, x; N) can be proved. Checking uniform convergence of fε(t, x; N) is a much more difficult problem. However, in practice the hardest problem is the computation of fε(t, x; N), which shall be discussed next.

When (numerical) computation is possible, one can often also justify the assumption of the continuity of fε(t, x; N). The uniform convergence assumption can then be replaced by the following condition: assume that, for some small δ > 0 and all sufficiently small ε ∈ [0, ε0],

    ∫∫ fε(s, x; N) ds dx < 1 + δ;

then P(T ≤ t, Z ≤ h) can be computed using f0(t, x; N) with an error bounded by

    |P(T ≤ t, Z ≤ h) − F0(t, h; N)| ≤ δ

for all values of (t, h). Finally, in order to reduce the amount of numerical integration, the definition of the density f0(t, x; N) can be modified as follows:

    fN(t, x) = fε^E(t, x; N) + E[ |ω̈(t)| q̃N(t, ω) | ω̇(t) = ε, ω(t) = x ] f_{ω(t),ω̇(t)}(x, ε),

where

    q̃N(t, ω) = 1_{ω(sᵢ) ≤ ω(t) for all i ≤ N}(ω).

Obviously q̃N(t, ω) ≥ 1_{T(ω)=t}, so fN(t, x) is an upper bound for the density of (T, Z). Since the choice of the points sᵢ is free, one should choose them so that ∫∫ fN(s, x) ds dx ≤ 1 + δ; then fN approximates the density of (T, Z) with the same error control. In the following subsection the required conditions are verified for Gaussian ω.

4.1 Approximation of the (T, Z)-density for Gaussian ω

In this subsection we shall give sufficient conditions, involving only the covariance structure of the Gaussian process ω, under which the assumptions of Lemma 8 and Theorem 10 are satisfied.

Let ω be a stationary Gaussian process such that the fourth spectral moment λ₄ exists and the fourth order derivative of the covariance function satisfies

    r⁽⁴⁾(τ) = λ₄ + o(|log |τ||^(−a))

as τ → 0, for some a > 1. Then ω̈(s) is a.s. continuous, see Cramér and Leadbetter (1967), and hence the assumptions of Theorem 6 and Corollary 9 are satisfied. If, in addition, we assume that the spectral measure of ω contains a continuous component, then, for any s ≠ t, (ω(s), ω(t), ω̇(s), ω̇(t)) has a nonsingular joint density, with a covariance matrix that depends in a continuous way on t − s. Hence the assumptions (I)-(III) of Lemma 8 are satisfied and we can conclude that

    lim_{ε→0} P(Tε ≤ t, Zε ≤ h) = P(T ≤ t, Z ≤ h).

Since the assumptions of Theorem 10 (B) are also satisfied for any value of t, we can then define

    fε(t, x) = lim_{N→∞} fε(t, x; N).

Finally, since the spectral measure contains a continuous component, it is clear that, for any N, the joint density of

    ω(s₁), ω̇(s₁), …, ω(s_N), ω̇(s_N), ω(t), ω̇(t), ω̈(t)

exists and that the functions fε(t, x; N) are continuous.

The non-stationary Gaussian process used in the next section is derived from the stationary one by means of conditioning on the values at a finite number of points tᵢ, 0 < t₁ < … < t_n < L. Studies of this type of process are motivated by practical applications in which the random function is observed at some fixed time points; see Sjö (2001) for a more detailed discussion. If the measurements consist of the random function plus some random noise (such as zero mean iid Gaussian variables), then one can modify the arguments presented for stationary Gaussian ω. However, if the observations are made without error, then the densities of ω(tᵢ) degenerate and the assumptions of Lemma 8 and Theorem 10 are not satisfied. In order to resolve this problem one has to choose the points t, s₁, …, s_N so that they do not coincide with the conditioning times tᵢ; then fε(t, x; N) exists and is still continuous. It is then easy to reformulate the assumptions in the lemma (and the theorem) so that this special case is covered. For clarity of presentation we have chosen not to do so. In order to give some indication of what type of modification is required we give an example: in Lemma 8 (III), the definition of A_δ should be changed to

    A_δ = {(s, t) ∈ [0, L] × [0, L] : |t − s| ≥ δ} ∩ ⋂_{i=1}^{n} {(s, t) : |s − tᵢ| ≥ δ, |t − tᵢ| ≥ δ}.

5 Examples

We will demonstrate the result with some examples. First we consider a stationary Gaussian process with a rather periodical behaviour (it has a typical oceanographic spectrum) and evaluate the density of the global maximum for two different interval lengths L. As a final example we study a non-stationary Gaussian process that is created from a stationary ditto by conditioning on the values at some points. The computations have been carried out with the toolbox WAFO (Wave Analysis in Fatigue and Oceanography), which is available without charge at http://www.maths.lth.se/matstat/wafo/.

5.1 Stationary process

Intuition may suggest that for a stationary process the position of the global maximum should be uniformly distributed, except at the endpoints. Our examples will show that this is not always true. The density is always perfectly symmetric around the midpoint of the interval, but its shape depends on the width of the interval relative to the periodicity and the irregularity of the process. A very narrow-banded process gives a rather different result than a broad-banded process. For the sake of clarity we present the endpoint densities f^E(0, x) and f^E(L, x) separately, since f^E is one-dimensional in x while f^C(s, x) is truly two-dimensional.

The process in our stationary example has zero mean, approximately 0.12 zero-upcrossings per unit interval, and 0.17 local maxima per unit interval. First we have evaluated the density of the position and height of the global maximum in the interval [0, 3], where the expected number of local maxima is roughly 0.5 (Figure 1), and then in the interval [0, 9] (Figure 2).

Figure 1: Density of position and height of the global maximum in the short interval [0, 3]. Left: Interior density f^C(s, x). Right: The left endpoint density f^E(0, x) and the right endpoint density f^E(3, x).

Figure 2: Density of position and height of the global maximum in the longer interval [0, 9]. Left: Interior density f^C(s, x). Right: The left endpoint density f^E(0, x) and the right endpoint density f^E(9, x).

The longer interval is expected to contain roughly 1.5 local maxima.

The short interval gives an almost uniform density for the position, but also a larger probability of having the global maximum at one of the endpoints. The proportion is 41% in the interior of the interval to 59% at the endpoints. In the wider interval it is more likely that the global maximum is in the central part of the interval; especially the smaller maxima are likely to be there, while the high local maxima are almost uniformly distributed. The proportion between interior and endpoints in this case is 82% to 18%. With the shorter interval it is also more probable to have a low global maximum with height below 0: P(Z ≤ 0) = 0.18 for the short interval, while P(Z ≤ 0) = 0.004 for the longer.

We also compared the evaluated densities to empirical results based on simulation; here we present the results for the short interval. We simulated 500 replications of a stationary process with the given spectral density, and in each replication the position and height of the global maximum were observed. Figure 3 shows the simulated results compared to the evaluated density. The cumulative functions integrated from the sub-densities are normalised to have maximum value 1.
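A simulation of this kind is easy to reproduce. The sketch below uses a Gaussian covariance function as a stand-in for the oceanographic spectrum of the example, so the grid, seed and resulting counts are illustrative assumptions and will not match the paper's figures exactly:

```python
import numpy as np

# Monte Carlo counterpart of Figure 3: simulate replications of a smooth
# stationary Gaussian process on [0, 3] and record the position and height
# of the global maximum, split into interior / left endpoint / right endpoint.
rng = np.random.default_rng(0)
L, m, reps = 3.0, 121, 500
t = np.linspace(0.0, L, m)
cov = np.exp(-(t[:, None] - t[None, :])**2) + 1e-10 * np.eye(m)
paths = rng.multivariate_normal(np.zeros(m), cov, size=reps, method='eigh')

k = paths.argmax(axis=1)          # index of the global maximum
T = t[k]                          # positions
Z = paths[np.arange(reps), k]     # heights
n_left = int((k == 0).sum())
n_right = int((k == m - 1).sum())
n_interior = reps - n_left - n_right
```

The three counts give the empirical interior/endpoint proportions, and the pairs (T, Z) can be compared with the analytical density as in Figure 3.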

5.2 Process conditioned on observations – a non-stationary process

This section exemplifies the situation where we would like to find the maximum of a process, given that we have a number of observations of it. We started by simulating a stationary Gaussian process; the simulation was 'observed' at four randomly located points, and a zero mean Gaussian measurement error with variance 1/10 of the variance of the stationary process was added. Conditional on the observations (with errors), we obtain a new process that is a non-stationary Gaussian process with mean function equal to the conditional mean, and covariance function equal to the conditional covariance.

In a real situation the covariance function of the stationary process normally has to be estimated, but in this example it is taken as known, i.e., it is the covariance function we simulated from. The stationary covariance function used is of the type a·exp(−bτ²), with parameters such that the covariance function is almost zero after τ = 1.2, so at that distance the influence of an observation has essentially vanished.
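The conditional (non-stationary) process is obtained by standard Gaussian conditioning. A minimal sketch, assuming illustrative values for a, b, the observation points and the observed values (the actual ones in the example are random):

```python
import numpy as np

# Conditioning the stationary process on noisy observations: prior covariance
# r(tau) = a*exp(-b*tau^2), observation noise variance a/10 as in the text.
a, b = 1.0, 2.0                          # assumed parameter values
def r(tau):
    return a * np.exp(-b * tau**2)

t = np.linspace(0.0, 3.0, 301)           # evaluation grid on [0, L]
t_obs = np.array([0.4, 1.1, 1.9, 2.6])   # four observation points (made up)
y_obs = np.array([0.5, -0.2, 0.8, 0.1])  # observed values (made up)

K_oo = r(t_obs[:, None] - t_obs[None, :]) + (a / 10) * np.eye(len(t_obs))
K_to = r(t[:, None] - t_obs[None, :])
w = np.linalg.solve(K_oo, y_obs)
cond_mean = K_to @ w                                # conditional mean function
S = np.linalg.solve(K_oo, K_to.T).T
cond_var = a - np.einsum('ij,ij->i', K_to, S)       # conditional variance
band = 2.0 * np.sqrt(np.maximum(cond_var, 0.0))     # +-2 sd confidence band
```

The conditional mean and the band cond_mean ± band correspond to the solid curve and the dashed bands shown in Figure 4.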

Figure 3: Simulation of position and height of the global maximum in [0, 3], 500 replications. The outcome was such that 202 maxima were located in the interior, 143 at the left endpoint, and 155 at the right, which is close to the analytical proportion. Top Left: Contours of the density in Figure 1 (Left), with the interior pairs of (T, Z) marked by dots. Top Right: Empirical distribution of T, given that the maximum is in the interior of the interval (irregular line), together with the cumulative distribution function integrated from f^C and normalised. Bottom Left: Empirical distributions of Z at the left endpoint and at the right endpoint (two irregular lines), together with the cumulative distribution function integrated from f^E(0, x) and normalised. Bottom Right: Empirical distribution of Z, given that the maximum is in the interior (irregular line), together with the cumulative distribution function integrated from f^C and normalised.

Figure 4: Left: Contours of the interior density f^C for the process conditioned on observations with measurement error; the contours enclose 10, 30, 50, 70, 90, 95 and 99 percent of the distribution. The locations and values of the observations are marked. The conditional mean, i.e., the reconstructed mean, is given by a solid curve, flanked by approximative confidence bands (dashed curves). Right: Left endpoint density f^E(0, x) (solid) and right endpoint density (dashed). The measurement error variance is a/10.

The reason for choosing only four observations is clarity; with many observations the density becomes very concentrated. To the left in Figure 4 there is a contour plot of the interior density f^C. The locations and values of the observations are marked. The plot also shows the conditional mean function, flanked by approximative confidence bands evaluated pointwise as the mean function ± two standard deviations. To the right in Figure 4 are the endpoint densities. The proportions are 91% in the interior, 0.8% to the left, and 8.2% to the right.

To illustrate the influence of the observation error we have repeated the same evaluation, but this time with the observation error variance equal to 0; the non-stationary process then has variance 0 at the observations, i.e., the situation commented on in Subsection 4.1. The result is shown in Figure 5. The dashed curves coincide with the solid curve at the observations since the conditional variance is 0 there. The contours of this density are more erratic, owing to numerical problems caused by the fact that the distribution is nearly singular close to the observations. Exactly at the observation points, the density is 0 for all heights. The proportions this time are 95.7% in the interior, 0.1% to the left, and 4.2% to the right.
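The degeneracy can be seen directly in the conditioning formula: with the noise term removed, the conditional variance vanishes at the observation points, and the covariance matrix of the conditioned process becomes nearly singular nearby. A sketch with the same illustrative parameter values as before:

```python
import numpy as np

# Zero observation-error variant: no noise term on the diagonal of K_oo,
# so the conditional variance is exactly 0 at the observation points.
a, b = 1.0, 2.0                          # assumed parameter values
def r(tau):
    return a * np.exp(-b * tau**2)

t = np.linspace(0.0, 3.0, 301)
t_obs = np.array([0.4, 1.1, 1.9, 2.6])   # observation points (made up)

K_oo = r(t_obs[:, None] - t_obs[None, :])            # no noise added
K_to = r(t[:, None] - t_obs[None, :])
S = np.linalg.solve(K_oo, K_to.T).T
cond_var = a - np.einsum('ij,ij->i', K_to, S)

i_obs = np.abs(t[:, None] - t_obs[None, :]).argmin(axis=0)  # grid indices of obs
```

cond_var[i_obs] is zero up to rounding, which is why the density of (T, Z) vanishes for all heights exactly at the observation points.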
Figure 5: Left: Contours of the interior density f^C for a process conditioned on observations without observation error; the level curves enclose 10, 30, 50, 70, 90, 95 and 99 percent of the distribution. The locations and values of the observations are marked. The conditional mean, i.e., the reconstructed mean, is given by a solid curve, flanked by approximative confidence bands (dashed curves). Right: Left endpoint density f^E(0, x) (solid), and right endpoint density f^E(3, x) (dashed).

References

Adler, R. J. (1981). The Geometry of Random Fields. Wiley, New York.

Banach, S. (1925). Sur les lignes rectifiables et les surfaces dont l'aire est finie. Fund. Math., 7, pp. 225-237.

Bulinskaya, E. V. (1961). On the mean number of crossings of a level by a stationary Gaussian process. Theory Probab. Appl., 6, pp. 435-438.

Cramér, H. and Leadbetter, M. R. (1967). Stationary and Related Stochastic Processes. J. Wiley & Sons, New York.

Diebolt, J. and Posse, C. (1996). On the Density of the Maximum of Smooth Gaussian Processes. The Annals of Probability, 24(3), 1104-1129.

Federer, H. (1969). Geometric Measure Theory. Springer-Verlag, Berlin.

Kac, M. (1943). On the average number of real roots of a random algebraic equation. Bull. Amer. Math. Soc., 49, pp. 314-320.

Leadbetter, M. R., Lindgren, G. and Rootzén, H. (1983). Extremes and Related Properties of Random Sequences and Processes. Springer-Verlag, New York.

Rice, S. (1944). Mathematical analysis of random noise. Bell Syst. Techn. J., 23, pp. 282-332.

Rice, S. (1945). The mathematical analysis of random noise. Bell Syst. Techn. J., 24, pp. 46-156.

Rychlik, I. and Lindgren, G. (1993). CROSSREG - a technique for first passage and wave density analysis. Probability in the Engineering and Informational Sciences, 7, 125-148.

Rychlik, I. (2000). On some reliability applications of Rice's formula for the intensity of level crossings. Extremes, 3(4), 331-348.

Sjö, E. (2001). Confidence Regions for Local Maxima and Minima of Surfaces Reconstructed from Finite Data. Methodology and Computing in Applied Probability, 3, 145-159.

Zähle, U. (1984). A General Rice Formula, Palm Measures, and Horizontal-Window Conditioning for Random Fields. Stochastic Processes and their Applications, 17, 265-283.

Appendix

Proof of Lemma 2:
We demonstrate first that ω(t) > Zε implies ω̇(t) < ε. Suppose ω(t) > Zε(ω) and ω̇(t) > ε ≥ 0 (note that ω̇(t) = ε together with ω(t) > Zε is impossible, since then t ∈ Aε). Since ω(t) > Zε ≥ ω(L), there is t₀ = inf{s ∈ (t, L) : ω̇(s) < 0}. Obviously ω̇(s) ≥ 0 for all s ∈ (t, t₀). Now, for continuous ω̇ there is s ∈ (t, t₀) such that ω̇(s) = ε and ω(s) > Zε(ω), which is a contradiction. Consequently, if ω(t) > Zε then ω̇(t) < ε, and hence the distance from Z(ω) to Zε(ω) has to be less than L·ε. The second statement of the lemma is obvious. □

Proof of Lemma 8:
In the proof we shall employ Bulinskaya's lemma, given next for completeness; see Cramér and Leadbetter (1967), p. 76, for the proof. Adler (1981), Theorem 3.2.1, presents a similar type of result for fields, which in slightly modified form will be used in the following proofs.

Lemma 11 (Bulinskaya (1961)) Let u be fixed. If the one dimensional density f_t(x) of the process ξ(t) is bounded in x and in 0 ≤ t ≤ 1, and if ξ(t) has, with probability one, a continuous sample derivative ξ̇(t), then the probability is zero that ξ̇(t) = 0 and ξ(t) = u simultaneously, for any t in 0 ≤ t ≤ 1.

We begin with the proof of (0). First we give alternative characterizations of the functions q(t, ω̇), 1_{Tε(ω)=0}(ω) and 1_{Tε(ω)=L}(ω), and then demonstrate their measurability. Only the measurability of q(t, ω̇) will be shown in detail; measurability of the indicators follows along the same lines.

Let us introduce ω(s, t) = ω(t) − ω(s) (obviously ω(s, t) = ∫ₛᵗ ω̇(z) dz is a function of ω̇) and the derivative difference ω₁₂(s, t) = ω̇(t) − ω̇(s). Then, for a fixed t,

    q(t, ω̇) = 1 ⟺ for all 0 < s < t: ω(s, t) > 0 or ω₁₂(s, t) > 0,
               and for all t ≤ s < L: ω(t, s) ≤ 0 or ω₁₂(t, s) < 0.    (17)

(Simply, ω(s, t) > 0 or ω₁₂(s, t) > 0 for 0 < s < t means that ω is higher or growing faster at t than at any other s ∈ (0, t).) Next, for any ε ≥ 0,

    Tε(ω) = 0 ⟺ ω(0, L) ≤ 0 and, for all 0 < s < L, ω(s, 0) > 0 or ω̇(s) < ε,    (18)
    Tε(ω) = L ⟺ ω(0, L) > 0 and, for all 0 < s < L, ω(s, L) > 0 or ω̇(s) < ε.    (19)

Let {sᵢ}, i = 1, 2, …, be a countable, dense subset of [0, L] such that s₁ = 0, s₂ = L. Define

    A⁻(t, ω) = ⋀ₙ ⋁ₖ ⋀ᵢ {(sᵢ ≤ t − n⁻¹) ⟹ (ω(sᵢ, t) ≥ k⁻¹ or ω₁₂(sᵢ, t) ≥ k⁻¹)},
    A⁺(t, ω) = ⋀ₙ ⋁ₖ ⋀ᵢ {(sᵢ ≥ t + n⁻¹) ⟹ (ω(t, sᵢ) ≤ 0 or ω₁₂(t, sᵢ) ≤ −k⁻¹)},

where n, k, i run over the positive integers. Now it is easy to see that, for continuously differentiable ω,

    q(t, ω̇) = 1 ⟺ ω(0, t) > 0 and ω(t, L) ≤ 0 and A⁻(t, ω) and A⁺(t, ω) are true.    (20)

Now, ω(s, t) and ω₁₂(s, t) are measurable functions of (s, t, ω̇), and hence, for fixed sᵢ, x, y, the functions

    1_{{ω₁₂(sᵢ,t) ≥ x} ∪ {ω(sᵢ,t) ≥ y}}(t, ω),    1_{{ω(0,t) > 0} ∩ {ω(t,L) ≤ 0}}(t, ω)

are measurable in (t, ω). Consequently q(s, ω̇) is measurable too, and (0) is proved.

We turn now to the proof of (I). First, note that since the densities of ω(0) and ω(L) exist, P(Tε = 0, Zε = h) = 0 and P(Tε = L, Zε = h) = 0 for any fixed h. Next we turn to the condition P(Zε = h) = 0 for any fixed h and ε ≥ 0. Since the density of ω(t) is bounded, employing Bulinskaya's lemma for ξ(t) = ω(t) + εt implies that, for fixed values of h and ε,

    P(ω(t) = h, ω̇(t) = ε for some t ∈ [0, L]) = 0,    (21)

which implies P(Zε = h) = 0; that P(Tε = t) = 0 for all t ∈ (0, L) follows in the same way.

We shall now prove (II). Denote by a(ε) and b(ε) the indicators 1_{Tε=0} and 1_{Tε=L}, characterized by (18)-(19). We must show that, with probability one,

    lim_{ε→0} a(ε) = 1_{T=0},    lim_{ε→0} b(ε) = 1_{T=L}.

We split C into three subsets. First, consider ω such that T(ω) = 0; then for all ε ≥ 0 we have Tε(ω) = 0, and hence a(ε) = 1 (and similarly b(ε) = 0).

The second case is 0 < T(ω) < L. Then there exists t ∈ (0, L) such that ω̇(t) = 0 and ω(t) > ω(0). Since ω(0) < ω(t), a(ε) tends to zero as ε goes to zero. We turn now to the limit of b(ε). Clearly, if ω(L) < ω(t) then also b(ε) tends to zero, so the only interesting case is ω(t) = ω(L). We shall show next that this can happen with probability zero. Again using Bulinskaya's lemma, we have that for any δ > 0

    P(ω(s) − ω(L) = 0, ω̇(s) = 0 for some s ∈ [0, L − δ]) = 0.

By taking a sequence of δ tending to zero we obtain

    P(ω(s) − ω(L) = 0, ω̇(s) = 0 for some s ∈ [0, L)) = 0,    (22)

and hence with probability one ω(L) < ω(t).

Finally, consider the case T(ω) = L; then ω(L) > ω(s) for all s ∈ [0, L), and hence b(ε) = 1 for all ε ≥ 0 while a(ε) = 0. Consequently, with probability one,

    lim_{ε→0} 1_{ω(0)≤h} 1_{Tε=0}(ω) = 1_{ω(0)≤h} 1_{T=0}(ω),
    lim_{ε→0} 1_{ω(L)≤h} 1_{Tε=L}(ω) = 1_{ω(L)≤h} 1_{T=L}(ω),

and (II) follows by dominated convergence.

We turn to (III). For ω such that T(ω) = 0 we have just seen that Tε(ω) = T(ω), and Zε(ω) → Z(ω) by Lemma 2. Next consider ω such that Z(ω) = ω(L) > ω(0). Then from (22) it follows that a.s. ω(L) > ω(s) for all s ∈ [0, L), and hence Tε(ω) = T(ω) while Zε(ω) → Z(ω). Finally, we consider ω such that ω(t) = Z(ω) > ω(0) for some t ∈ (0, L). (From the previous case we have that a.s. ω(t) > ω(L).) Now, for a fixed δ > 0, by a minor modification of the proof of Theorem 3.2.1 in Adler (1981) (with the field X(s, t) = ω(t) − ω(s)), one can show that

    P(ω(t) = ω(s), ω̇(s) = ω̇(t) = 0 for some (s, t) ∈ A_δ) = 0.

Now, by letting δ tend to zero, we conclude that with probability one there is a unique t ∈ (0, L) such that ω(t) = Z(ω). Consequently t is the unique global maximum on [0, L] and, by Lemma 2, (Tε, Zε) → (T, Z). This completes the proof of (III). □

Proof of Theorem 10:
We begin with statement (A). By means of Fubini's theorem, the integrability of fε(t, x; 0) is implied by (15). Since qN(t, ω̇) ≥ q(t, ω̇), and qN(0, ω) ≥ 1_{Tε=0}, qN(L, ω) ≥ 1_{Tε=L} are decreasing sequences of random variables, by (16)

    ∫∫ (fε(t, x; N) − fε(t, x)) dt dx ≥ 0

converges to zero as N tends to infinity, and hence fε(t, x; N) converges to fε(t, x) for almost all (t, x).

We turn to statement (B). Under the stated boundedness assumptions one can show that, with probability one, for fixed t the number of s ∈ [0, L] such that ω̇(t) = ω̇(s) and ω̈(s) = 0 is at most one, and the number of s ∈ [0, L] such that ω(t) = ω(s) and ω̇(t) = ω̇(s) is one. Further,

    q(t, ω̇) = 1 ⟺ ⋀_{s∈[0,L]} ω(s) ≤ ω(t) or ω̇(s) ≤ ω̇(t)₊,    (23)

and, for t = 0,

    q(0, ω) = 1 ⟺ ⋀_{s∈[0,L]} ω(s) ≤ ω(0) or ω̇(s) ≤ ε,    (24)

and, for t = L,

    q(L, ω) = 1 ⟺ ⋀_{s∈[0,L]} ω(s) ≤ ω(L) or ω̇(s) ≤ ε.    (25)

(The proof of statements (23)-(25) involves elementary manipulations of (17)-(19) and hence is omitted.) Consequently, with probability one,

    lim_{N→∞} (qN(t, ω̇), qN(0, ω), qN(L, ω)) = (q(t, ω̇), 1_{Tε(ω)=0}, 1_{Tε(ω)=L}),    (26)

and hence, by dominated convergence, fε(t, x; N) converges to fε(t, x) as N → ∞, for almost all (t, x). □