
1 Residual lifetime process of a Poisson process

(a) Each sample path of $R$ is a strictly positive function on $[0, \infty)$ that decreases with slope $-1$ in between upward jumps, which happen just as the function would otherwise reach zero.

(Figure: a sawtooth sample path of $R_t$ versus $t$, decreasing at slope $-1$ with upward jumps.)

Recall that one characterization of a Poisson process $N$ with rate $\lambda$ is that the times between successive jumps, $U_1, U_2, \ldots$, are independent and have the exponential distribution with parameter $\lambda$. The initial state $R_0$ is equal to $U_1$, and the successive upward jumps of $R$ have sizes $U_2, U_3, \ldots$. Thus, $R_0$ and the sizes of the upward jumps are mutually independent. To see why $R$ is a Markov process, fix $t \geq 0$; think of $t$ as the present time. The past $(R_s : 0 \leq s \leq t)$ is determined by the initial state $R_0$ and the sizes of the jumps of $R$ that occur during $(0, t]$. The future $(R_s : s \geq t)$ is determined by the present state $R_t$ and the sizes of future jumps. Since the sizes of the future jumps are independent of the past, the future given $R_t$ is conditionally independent of the past.

(b) For $c \geq 0$, $P\{R_t > c\} = P\{N_{t+c} - N_t = 0\} = e^{-\lambda c}$, because $N_{t+c} - N_t$ is a Poisson random variable with mean $\lambda c$. Thus, $R_t$ has the exponential distribution with parameter $\lambda$. This distribution is the same for all $t$, so it is the equilibrium distribution of $R$.
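As a sanity check, the claim that $R_t$ is exponential with parameter $\lambda$ can be tested by direct simulation (a minimal sketch; the rate $\lambda = 2$, the inspection time $t = 2$, and the helper `residual` are choices made for this illustration only):

```python
import math
import random

random.seed(1)
lam, t = 2.0, 2.0  # rate of the Poisson process and a fixed inspection time

def residual(lam, t):
    """Accumulate Exp(lam) interarrival times past t; return time to next jump."""
    s = 0.0
    while True:
        s += random.expovariate(lam)
        if s > t:
            return s - t  # residual lifetime R_t

n = 200_000
samples = [residual(lam, t) for _ in range(n)]
# If R_t ~ Exp(lam): the mean is 1/lam and P{R_t > c} = exp(-lam * c)
mean_hat = sum(samples) / n
tail_hat = sum(1 for x in samples if x > 0.5) / n
print(round(mean_hat, 3), round(tail_hat, 3))  # near 1/lam = 0.5 and exp(-1) ~ 0.368
```

With the seed fixed, both estimates agree with the exponential predictions to within Monte Carlo error.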

(c) Let $t \geq 0$, $x > 0$, $\delta > 0$, and consider the conditional distribution of $R_{t+\delta}$ given $R_t = x$. On one hand, if $\delta < x$ then $R_{t+\delta} = x - \delta$ with conditional probability one. That is, if $\delta < x$, the conditional distribution is concentrated on the single point $x - \delta$. On the other hand, given $R_t = x$, the process starts over at time $t + x$, so that if $\delta \geq x$, the conditional distribution of $R_{t+\delta}$ given $R_t = x$ is the same as the unconditional distribution of $R_{\delta - x}$, which by part (b) is the exponential distribution with parameter $\lambda$. One way to summarize this is to use a function, letting $f(y, \delta|x)$ denote the conditional density of $R_{t+\delta}$ given $R_t = x$, and writing:
$$f(y, \delta|x) = \begin{cases} \delta\big(y - (x - \delta)\big) & \text{if } \delta < x \\ \lambda e^{-\lambda y} \ \text{for } y \geq 0 & \text{if } \delta \geq x, \end{cases}$$
where $\delta(\cdot)$ in the first case denotes the Dirac delta function.

(d) $R$ is not continuous in the a.s. sample-path sense because the sample paths have jumps. The jump times of $R$ are the same as the jump times of $N$; the time of the $n$th jump has the gamma density with parameters $n$ and $\lambda$. Thus, the probability there is a jump of $R$ at any fixed time is zero, so $R$ is a.s. continuous at each $t$. It follows that $R$ is also continuous in the p. and d. senses. We claim also that $R$ is continuous in the m.s. sense. Here is a proof based on Proposition 2.1.13(c). Fix some time $t$; it suffices to show that $R$ is m.s. continuous at $t$. Let $(t_n : n \geq 1)$ be a sequence converging to $t$. Without loss of generality, suppose $|t_n - t| \leq 1$ for all $n$. As already discussed, $R_{t_n} \to R_t$ in p. as $n \to \infty$. Let $Y = R_{t+1} + 2$. Since $R_{t+1}$ has the exponential distribution with parameter $\lambda$, it follows that $E[Y^2] < \infty$. Also, due to the structure of the sample paths of $R$ (slope $-1$ in between jumps), $|R_s| \leq Y$ for all $s$ such that $|s - t| \leq 1$. Thus, $|R_{t_n}| \leq Y$ for all $n$. Therefore, $R_{t_n} \to R_t$ in the m.s. sense.

2 Prediction using a derivative

The derivative process $X'$ exists in the m.s. sense, since $R_X(\tau) = \frac{1}{1+\tau^2}$ is twice continuously differentiable, and
$$R_{X,X'}(\tau) = -R_X'(\tau) = \frac{2\tau}{(1+\tau^2)^2}, \qquad R_{X'}(\tau) = -R_X''(\tau) = \frac{2 - 6\tau^2}{(1+\tau^2)^3}.$$
By the formulas for $\hat{E}$ and the resulting MSE,
$$\hat{E}[X_\tau | X'_0] = \frac{R_{X,X'}(\tau)\, X'_0}{R_{X'}(0)} = \frac{\tau X'_0}{(1+\tau^2)^2}$$
$$\mathrm{MSE} = R_X(0) - \frac{R_{X,X'}(\tau)^2}{R_{X'}(0)} = 1 - \frac{2\tau^2}{(1+\tau^2)^4},$$
which is minimized at $\tau = \frac{1}{\sqrt{3}} \approx 0.5774$.
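The minimizing lag can be confirmed with a crude grid search over the MSE expression above (a sketch; the grid range and resolution are arbitrary):

```python
import math

def mse(tau):
    # MSE of the estimate of X_tau given X'_0, from the formula above
    return 1.0 - 2.0 * tau**2 / (1.0 + tau**2) ** 4

# crude grid search over 0 < tau < 2
taus = [i / 10000 for i in range(1, 20000)]
tau_star = min(taus, key=mse)
print(round(tau_star, 4))  # close to 1/sqrt(3) ~ 0.5774
```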

3 Prediction of future integral of a Gaussian Markov process

(a) Since $J$ is the integral of a Gaussian process, $J$ has a Gaussian distribution. It remains to find the mean and variance.
$$E[J] = \int_0^\infty e^{-\alpha t} E[X_t]\, dt = 0.$$
$$\begin{aligned}
\mathrm{Var}(J) = E[J^2] &= \int_0^\infty \int_0^\infty e^{-\alpha s} e^{-\alpha t} e^{-\beta|s-t|}\, ds\, dt \\
&= 2 \int_0^\infty \int_0^t e^{-\alpha s} e^{-\alpha t} e^{-\beta(t-s)}\, ds\, dt \\
&= 2 \int_0^\infty e^{-(\alpha+\beta)t} \int_0^t e^{-(\alpha-\beta)s}\, ds\, dt \\
&= \frac{2}{\alpha-\beta} \int_0^\infty e^{-(\alpha+\beta)t} \left(1 - e^{-(\alpha-\beta)t}\right) dt \\
&= \frac{2}{\alpha-\beta} \left( \frac{1}{\alpha+\beta} - \frac{1}{2\alpha} \right) = \frac{1}{\alpha(\alpha+\beta)}
\end{aligned}$$
(The above gives the correct answer even if $\alpha = \beta$; this can be checked directly or by continuity.)
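The closed form for $\mathrm{Var}(J)$ can be spot-checked by numerically evaluating the truncated double integral (a sketch; the test values $\alpha = 1$, $\beta = 0.5$ and the truncation horizon are arbitrary choices):

```python
import math

alpha, beta = 1.0, 0.5
T, h = 15.0, 0.02  # truncation horizon and grid step; the neglected tail is ~e^{-15}
n = int(T / h)

# midpoint Riemann sum of the double integral defining Var(J)
total = 0.0
for i in range(n):
    s = (i + 0.5) * h
    es = math.exp(-alpha * s)
    for j in range(n):
        t = (j + 0.5) * h
        total += es * math.exp(-alpha * t - beta * abs(s - t))
var_numeric = total * h * h
var_formula = 1.0 / (alpha * (alpha + beta))
print(round(var_numeric, 3), round(var_formula, 3))  # the two values should agree closely
```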

(b) Since $X_0$ and $J$ are jointly Gaussian, we can use the standard formulas for conditional expectation and MSE. Both $J$ and $X_0$ have mean zero,
$$\mathrm{Cov}(J, X_0) = \int_0^\infty e^{-\alpha t} e^{-\beta|t-0|}\, dt = \frac{1}{\alpha+\beta},$$
and $\mathrm{Var}(X_0) = R_X(0) = 1$. So
$$E[J|X_0] = \hat{E}[J|X_0] = \frac{X_0}{\alpha+\beta} \quad \text{and} \quad \mathrm{MSE} = \frac{1}{\alpha(\alpha+\beta)} - \left(\frac{1}{\alpha+\beta}\right)^2 = \frac{\beta}{\alpha(\alpha+\beta)^2}.$$
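The last algebraic step for the MSE can be verified with exact rational arithmetic (a small sketch; the $(\alpha, \beta)$ test pairs are arbitrary):

```python
from fractions import Fraction

# sanity check of the MSE algebra with exact rationals at a few test points
for alpha, beta in [(1, 2), (3, 1), (5, 7), (2, 2)]:
    a, b = Fraction(alpha), Fraction(beta)
    var_J = 1 / (a * (a + b))   # Var(J) from part (a)
    cov = 1 / (a + b)           # Cov(J, X_0)
    mse = var_J - cov**2        # Var(X_0) = 1
    assert mse == b / (a * (a + b) ** 2)
print("MSE identity holds at all test points")
```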


4 A two-state stationary Markov process

(a) Since $E[X_t] = 0$ for each $t$ and since $X_t$ takes values in $\{-1, 1\}$, the distribution of $X_t$ for any $t$ must be given by $\pi_i = 0.5$ for $i \in \{-1, 1\}$. Solving the Kolmogorov forward equations (see Example 4.9.3) yields the distribution of the process at any time $t \geq 0$ for any given initial distribution:
$$\pi(t) = \pi(0)e^{-2\alpha t} + (0.5, 0.5)(1 - e^{-2\alpha t})$$
So the transition probability functions are given by:
$$p_{ij}(\tau) = \begin{cases} 0.5(1 + e^{-2\alpha\tau}) & \text{if } i = j \\ 0.5(1 - e^{-2\alpha\tau}) & \text{if } i \neq j \end{cases}$$
so that for $\tau \geq 0$,
$$R_X(\tau) = E[X(\tau)X(0)] = \sum_{i \in \{-1,1\}} \sum_{j \in \{-1,1\}} P\{X(0)=i, X(\tau)=j\}\, ij = \sum_{i \in \{-1,1\}} \sum_{j \in \{-1,1\}} \pi_i\, p_{i,j}(\tau)\, ij$$
$$= \frac{1}{4}\left[(1 + e^{-2\alpha\tau}) + (1 + e^{-2\alpha\tau}) - (1 - e^{-2\alpha\tau}) - (1 - e^{-2\alpha\tau})\right] = e^{-2\alpha\tau}.$$
Hence for all $\tau$, $R_X(\tau) = e^{-2\alpha|\tau|}$.
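Since $X(\tau)X(0) = (-1)^N$ where $N$ is the number of state flips in $(0, \tau]$, the formula $R_X(\tau) = e^{-2\alpha\tau}$ can also be checked by Monte Carlo (a sketch; $\alpha = 1$ and $\tau = 0.5$ are arbitrary, and `flips_by` is a helper written for this check):

```python
import math
import random

random.seed(7)
alpha, tau = 1.0, 0.5  # flip rate of the chain and the lag

def flips_by(tau, alpha):
    """Count state flips in (0, tau]; holding times are i.i.d. Exp(alpha)."""
    s, k = 0.0, 0
    while True:
        s += random.expovariate(alpha)
        if s > tau:
            return k
        k += 1

n = 200_000
# X(tau)X(0) = (-1)^{number of flips}, so R_X(tau) = E[(-1)^N]
r_hat = sum((-1) ** flips_by(tau, alpha) for _ in range(n)) / n
print(round(r_hat, 3))  # close to exp(-2*alpha*tau) = exp(-1) ~ 0.368
```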

(b) For all $\alpha$, because $R_X$ is continuous.

(c) For $\alpha = 0$, because $R_X$ is twice continuously differentiable in a neighborhood of zero if and only if $\alpha = 0$.

(d) For $\alpha > 0$, because $\lim_{\tau \to \infty} R_X(\tau) = 0$ for $\alpha > 0$, whereas $\lim_{\tau \to \infty} R_X(\tau) = 1$ if $\alpha = 0$.

5 Some Fourier series representations

(a) The coordinates of $f$ are given by $c_i = (f, \phi_i)$ for $i \geq 1$. Integrating, we find $c_1 = \frac{T^{3/2}}{2}$. For $k \geq 1$, we use integration by parts to obtain: $c_{2k} = 0$ and $c_{2k+1} = -\frac{\sqrt{2}\, T^{3/2}}{2\pi k}$.

(b) The best $N$ eigenfunctions to use are the ones with the largest magnitude coordinates. Thus,
$$f^{(N)}(t) = \frac{T^{3/2}}{2}\,\phi_1(t) - \sum_{k=1}^{N-1} \frac{\sqrt{2}\, T^{3/2}}{2\pi k}\, \phi_{2k+1}(t).$$
We find $||f||^2 = \int_0^T |f(t)|^2\, dt = \frac{T^3}{3}$ (and we can check that $\sum_{i=1}^\infty c_i^2 = \frac{T^3}{3}$ too). Now
$$||f - f^{(N)}||^2 = \sum_{k=N}^\infty c_{2k+1}^2 = \frac{T^3}{2\pi^2} \sum_{k=N}^\infty \frac{1}{k^2}$$
so
$$\frac{||f - f^{(N)}||^2}{||f||^2} = \frac{3}{2\pi^2} \sum_{k=N}^\infty \frac{1}{k^2} \leq \frac{3}{2\pi^2} \int_{N-1}^\infty \frac{1}{x^2}\, dx = \frac{3}{2\pi^2 (N-1)} \leq 0.01$$
if $N \geq 1 + \frac{3}{2\pi^2 (0.01)} \approx 16.2$, so $N = 17$ suffices.

(c) Without loss of generality, we assume the parameter $\sigma^2$ of the Brownian motion is one. The $N$-dimensional random process closest to $W$ in the mean squared $L^2$ norm sense is obtained by using the $N$ terms of the KL expansion of $W$ with the largest eigenvalues. Note $E[||W||^2] = E\left[\int_0^T W_t^2\, dt\right] = \int_0^T t\, dt = \frac{T^2}{2}$. The eigenvalues for the KL expansion of $W$ are given by $\lambda_n = \frac{4T^2}{(2n+1)^2 \pi^2}$ for $n \geq 0$. Thus,
$$W^{(N)}(t) = \sum_{n=0}^{N-1} (W, \phi_n)\, \phi_n(t).$$
$$E[||W - W^{(N)}||^2] = \sum_{n=N}^\infty \lambda_n = \frac{4T^2}{\pi^2} \sum_{n=N}^\infty \frac{1}{(2n+1)^2}$$
so
$$\frac{E[||W - W^{(N)}||^2]}{E[||W||^2]} = \frac{8}{\pi^2} \sum_{n=N}^\infty \frac{1}{(2n+1)^2} \leq \frac{4}{\pi^2} \int_{2N}^\infty \frac{1}{x^2}\, dx = \frac{2}{\pi^2 N} \leq 0.01$$
if $N \geq \frac{2}{\pi^2 (0.01)} \approx 20.26$, so $N = 21$ suffices.
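Both cutoffs can be confirmed by evaluating the tail sums directly instead of through the integral bounds (a sketch; the truncation index is an arbitrary large value):

```python
import math

K = 1_000_000  # truncation index; the neglected tail is below 1/K

# part (b): relative error of f^(N) with N = 17
N_b = 17
rel_b = (3 / (2 * math.pi**2)) * sum(1 / k**2 for k in range(N_b, K))
# part (c): relative error of W^(N) with N = 21
N_c = 21
rel_c = (8 / math.pi**2) * sum(1 / (2 * n + 1) ** 2 for n in range(N_c, K))
print(round(rel_b, 4), round(rel_c, 4))  # both land just under 0.01
```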

6 First order differential equation driven by Gaussian white noise

(a) Since $\mu_N \equiv 0$ it follows that $\mu_X(t) = x_0 e^{-\lambda t}$. The covariance function of $X$ is given, for $s \leq t$, by:
$$\begin{aligned}
C_X(s, t) &= \int_0^s \int_0^t e^{-\lambda(s-u)} e^{-\lambda(t-v)} \sigma^2 \delta(u - v)\, dv\, du \\
&= \int_0^s e^{-\lambda(s-u)} e^{-\lambda(t-u)} \sigma^2\, du = \sigma^2 e^{-\lambda(s+t)} \int_0^s e^{2\lambda u}\, du \\
&= \frac{\sigma^2}{2\lambda}\left(e^{-\lambda(t-s)} - e^{-\lambda(s+t)}\right)
\end{aligned}$$
By the symmetry of $C_X$, it is given in general by
$$C_X(s, t) = \frac{\sigma^2}{2\lambda}\left(e^{-\lambda|t-s|} - e^{-\lambda(t+s)}\right)$$
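The covariance formula can be spot-checked by simulating $X' = -\lambda X + N$ with an Euler-Maruyama discretization (a sketch under assumed parameters $\lambda = \sigma^2 = 1$ and $x_0 = 0$, so that $C_X(s,t) = E[X_s X_t]$; the step size and path count are arbitrary):

```python
import math
import random

random.seed(3)
lam, sigma2, x0 = 1.0, 1.0, 0.0   # x0 = 0 makes mu_X identically zero
dt, n_steps, n_paths = 0.01, 100, 10_000
s_idx, t_idx = 50, 100            # sample times s = 0.5 and t = 1.0

acc = 0.0
sd = math.sqrt(sigma2 * dt)       # std of one white-noise increment
for _ in range(n_paths):
    x = x0
    xs = None
    for k in range(1, n_steps + 1):
        # Euler-Maruyama step for X' = -lam * X + N
        x += -lam * x * dt + random.gauss(0.0, sd)
        if k == s_idx:
            xs = x
    acc += xs * x                 # x now holds X_t
c_numeric = acc / n_paths
s, t = s_idx * dt, t_idx * dt
c_formula = sigma2 / (2 * lam) * (math.exp(-lam * (t - s)) - math.exp(-lam * (t + s)))
print(round(c_numeric, 3), round(c_formula, 3))  # both near 0.19
```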

(b) Let $r < s < t$. It must be checked that
$$\frac{C_X(r,s)\, C_X(s,t)}{C_X(s,s)} = C_X(r,t)$$
or $\left(e^{-\lambda(s-r)} - e^{-\lambda(r+s)}\right)\left(e^{-\lambda(t-s)} - e^{-\lambda(s+t)}\right) = \left(e^{-\lambda(t-r)} - e^{-\lambda(r+t)}\right)\left(1 - e^{-2\lambda s}\right)$, which is easily done.
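The displayed identity can also be checked mechanically at random test points (a sketch; the rate and time values are arbitrary):

```python
import math
import random

random.seed(0)

def lhs_rhs(lam, r, s, t):
    """Both sides of the identity to be checked, for given rate and times r < s < t."""
    e = math.exp
    lhs = (e(-lam * (s - r)) - e(-lam * (r + s))) * (e(-lam * (t - s)) - e(-lam * (s + t)))
    rhs = (e(-lam * (t - r)) - e(-lam * (r + t))) * (1 - e(-2 * lam * s))
    return lhs, rhs

# spot-check at random points 0 < r < s < t and random rates
for _ in range(1000):
    lam = random.uniform(0.1, 3.0)
    r, s, t = sorted(random.uniform(0.0, 5.0) for _ in range(3))
    lhs, rhs = lhs_rhs(lam, r, s, t)
    assert abs(lhs - rhs) < 1e-12
print("identity verified at 1000 random points")
```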

(c) As $t \to \infty$, $\mu_X(t) \to 0$ and $C_X(t + \tau, t) \to \frac{\sigma^2}{2\lambda} e^{-\lambda|\tau|}$.

